Banks and FinTechs: Enriching CX with Frictionless, AI-Driven KYC

A Forrester report suggests that by 2030, banking will be invisible, connected, insights-driven, and purposeful. "Trust" will be key to building the industry of the future.

But how do banks and FinTechs enable an excellent customer experience (CX) that translates into "trust" when the onboarding experience itself is time-consuming and prone to error? The disengagement is clear from industry reports: 85% of corporates complained that the KYC experience was poor. Worse, 12% of corporate customers changed banks because of that poor experience.

Losing a customer is disastrous because the investment and effort that go into the process are immense. Both KYC and Customer Lifecycle Management (CLM) are expensive and time-consuming. A bank could employ hundreds of staff to procure, analyze, and validate documents for a single high-risk client. Thomson Reuters reports that banks use 307 employees for KYC on average and spend $40 million (on average) to onboard new clients. When a customer defects due to poor engagement, it is a double whammy for the bank: it loses a client and has to work harder to recover the investment it made. Industry reports indicate that acquiring a new customer is five times as costly as retaining an existing one.

The same scenario applies to financial companies, which must be very careful about who they take on as clients. As a result, FinTechs struggle with growing demand for customer-centricity while fending off competition from challengers. By investing in digital transformation initiatives like digital KYC, many challenger banks and FinTechs deliver exceptional CX outcomes and gain a foothold.

Today, commercial banks and FinTechs cannot afford to overlook regulatory measures, anti-terrorism and anti-money laundering (AML) standards, and legislation, violations of which incur hefty fines and reputational damage. The essence of KYC is to create a robust, transparent, and up-to-date profile of the customer. Banks and FinTechs investigate the source of their wealth, the ownership of accounts, and how they manage their assets. Scandals like Wirecard have a domino effect, so banks must flag inconsistencies in real time. As a result, banks and FinTechs have teamed up with digital transformation partners and are using emerging technologies such as AI, ML, and NLP to make their operations frictionless and customer-centric.

Decoding existing pain points and examining the need for a comprehensive data extraction tool to facilitate seamless KYC

Long time-to-revenue results in poor CX

Customer disengagement in the financial sector is common: every year, financial companies lose revenue due to poor CX. The prime culprit for customer dissatisfaction is prolonged time-to-revenue. KYC and onboarding for high-risk clients average 90-120 days.

The two pain points are poor data management and traditional, predominantly manual methods for extracting data from documents. Banking C-suite executives concede that poor data management, arising from silos and centralized architecture, is responsible for high time-to-revenue.

The rise of exhaust data 

Traditionally, KYC involved checks on data sources such as ownership documents, stakeholder documents, and the social security/identity checks of every corporate employee. Today, however, a KYC investigation is incomplete without verification of exhaust data, and in the evolving business landscape it is essential that FinTechs and banks take exhaust data into account.

Emerging technologies like AI, ML, and NLP make onboarding and Client Lifecycle Management (CLM) transparent and robust. With an end-to-end CLM solution, banks and FinTechs can benefit from an API-first ecosystem that supports a managed-by-exception approach, which is ideal for medium- to low-risk clients. Data management tools that can extract data from complex documents and read like humans elevate the CX and save banks precious time and money.

Sheer volume of paperwork prolongs onboarding

The amount of paperwork accompanying onboarding and KYC is humongous. For business or institutional accounts, banks must verify the existence of every person on the payroll. Apart from social security and identity checks and screening for ultimate beneficial owners (UBOs) and politically exposed persons (PEPs), banks have to cross-examine documents related to the organization's structure. Verifying the ownership of the organization and checking the beneficiaries add to the complexity. After that comes corroborating data with media checks and undertaking corporate analysis to develop a risk profile. With this kind of paperwork involved, KYC can take days.

However, as this is a low-complexity task, it is profitable to invest in AI. Instead of employing teams to extract and verify data, banks and FinTechs can use data extraction and comprehension tools (powered by AI and enabled with machine learning) to accelerate paperwork processing. These tools digitize documents, extract data from structured and unstructured documents, and, as they evolve over time, detect and learn from document patterns. ML and NLP have that advantage over legacy systems: learning from iterations.

Walking the tightrope between compliance and quick time-to-revenue

Over the years, the regulatory framework that America has adopted to mitigate financial crimes has become highly complex. There are multiple checks at multiple levels, and enterprise-wide compliance is expected. Running KYC engages both back- and front-office operations. With changing regulations, banks and FinTechs must ensure that KYC policies and processes are up to date. Ensuring that customers meet their KYC obligations across jurisdictions is time-consuming and prolonged if done manually. Hence, an AI-enabled tool is needed to speed up processes, provide a 360-degree view, and assess risk exposure.

In 2001, the Patriot Act came into force to counter terrorist and money laundering activities, and KYC became mandatory. In 2018, the U.S. Financial Crimes Enforcement Network (FinCEN) introduced a new requirement for banks: they had to verify the "identity of natural persons of legal entity customers who own, control, and profit from companies when those organizations open accounts." Hefty fines are levied if banks fail to execute due diligence as mandated.

If they rely on manual efforts alone, banks and FinTechs will find it challenging to ensure good CX and quick time-to-revenue while adhering to regulations. To accelerate the pace of operations, they need tools that can parse data with greater accuracy and reliability than the human brain, and that can learn from processes.

No time for perpetual KYC as banks struggle with basic KYC

For most low- and medium-risk customers, straight-through processing (STP) of data would be ideal: it reduces errors and time-to-revenue. Client Lifecycle Management is essential in today's business environment, as it involves ensuring customers remain compliant through all stages and events in their lifecycle with their financial institution. That includes combing through exhaust data and traditional data from time to time to identify gaps.

A powerful document extraction and comprehension tool is therefore no longer an option but a prime requirement.  

Document extraction and comprehension tool: how it works 

Document digitization: Intelligent document processing (IDP) begins with document digitization. Documents that are not in digital format are scanned.

OCR: The next step is to read the text, which is the job of optical character recognition (OCR). Many organizations use multiple OCR engines for accuracy.

NLP: Recognition of the text follows the reading of the text. With NLP, words, sentences, and paragraphs are given meaning. NLP uses techniques such as sentiment analysis and part-of-speech tagging, making it easier to draw relations.

Classification of documents: Manual categorization of documents is another lengthy process, one tackled by IDP's classification engine. Here, machine learning (ML) models are employed to recognize the kinds of documents and feed them to the system.

Extraction: The penultimate step in IDP is data extraction. It consists of labeling all expected information within a document and extracting specific data elements like dates, names, numbers, etc.

Data validation: Once the data has been extracted, it is combined, and pre-defined AI-based validation rules check for accuracy and flag errors, improving the quality of the extracted data.

Integration/Release: Once the data has been validated/checked, the documents and images are exported to business processes or workflows. 
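Strung together, the steps above amount to a short pipeline. The sketch below is purely illustrative: the document text, classification rule, and validation rules are hypothetical stand-ins, and a production IDP system would call real OCR engines and trained ML models at the corresponding stages.

```python
import re

# Hypothetical, minimal sketch of an IDP pipeline; each function mirrors
# one step described above.

def ocr(page_text: str) -> str:
    # Stand-in for an OCR engine: assume the text is already recognized.
    return page_text

def classify(text: str) -> str:
    # Toy classification rule; real systems use trained ML classifiers.
    return "invoice" if "invoice" in text.lower() else "unknown"

def extract(text: str) -> dict:
    # Label and pull out expected elements (dates, amounts) with regexes.
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    amount = re.search(r"\$([\d,]+\.\d{2})", text)
    return {
        "date": date.group(0) if date else None,
        "amount": amount.group(1) if amount else None,
    }

def validate(record: dict) -> list:
    # Pre-defined validation rules flag errors before downstream release.
    errors = []
    if record["date"] is None:
        errors.append("missing date")
    if record["amount"] is None:
        errors.append("missing amount")
    return errors

def process(document_text: str) -> dict:
    text = ocr(document_text)
    record = extract(text)
    return {"type": classify(text), "data": record, "errors": validate(record)}

result = process("Invoice 2021-03-15 total $1,250.00 payable to Acme Corp")
```

A clean run produces a typed, validated record ready for the integration/release step; any rule violation surfaces in the `errors` list instead of silently flowing downstream.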

The future is automation!

An enriched customer experience begins with automation. To win customer trust, commercial banks and FinTechs must ensure regulatory compliance, improve CX, reduce costs by incorporating AI and ML, and ensure a swifter onboarding process. In the future, banks and FinTechs that improve their digital transformation initiatives and enable faster, smoother onboarding and customer lifecycle management will facilitate deeper customer engagement. They will have gained an edge; others will struggle in an unrelenting business landscape.

True, there is no single standard for KYC in the banking and FinTech industry; the industry is as vast as the number of players, with challengers and start-ups coexisting alongside decades-old financial institutions. However, there is no question that data-driven KYC powered by AI and ML brings greater efficiency and drives customer satisfaction.

A tool like Magic DeepSight™ is a one-stop solution for comprehensive data extraction, transformation, and delivery from a wide range of unstructured data sources. Going beyond data extraction, Magic DeepSight™ leverages AI, ML, and NLP technologies to drive exceptional results for banks and FinTechs. It is a complete solution, as it integrates with other technologies such as APIs, RPA, smart contracts, etc., to ensure frictionless KYC and onboarding. That is what modern banks and FinTechs need.

Data on-boarding from unstructured sources: Bridging a critical gap in leveraging industry platforms

Ingesting Unstructured Data into Other Platforms

Industry-specific products and platforms, like ERPs for specific functions and processes, have contributed immensely to enhancing efficiency and productivity. SI partners and end users have focused on integrating these platforms with existing workflows through a combination of customizing/configuring the platforms and re-engineering existing workflows. Data onboarding is a critical activity; however, it has been restricted to integrating the platforms with the existing ecosystem. A key element that is very often ignored is integrating unstructured data sources into the data onboarding process.

Most enterprise-grade products and platforms require a comprehensive utility that can extract and process a wide set of unstructured documents and data sources and ingest the output into a defined set of fields spread across several internal and third-party applications on behalf of their clients. You are likely extracting and ingesting this data manually today, but an automated utility could be a key differentiator that removes time, effort, and errors from the extraction process.

Customers have often equated OCR technologies with solutions to these problems; however, OCR suffers from quality and efficiency issues and still requires manual effort. More importantly, OCR extracts the entire document rather than just the relevant data elements, adding significant noise to the process. And finally, the task of ingesting this data into the relevant fields in the applications/platforms is still manual.

When it comes to widely used and “customizable” case management platforms for Fincrime applications, CRM platforms, or client on-boarding/KYC platforms, there is a vast universe of unstructured data that requires processing outside of the platform in order for the workflow to be useful. Automating manual extraction of critical data elements from unstructured sources with the help of an intelligent data ingestion utility enables users to repurpose critical resources tasked with repetitive offline data processing.

Your data ingestion utility can be a “bolt on” or a simple API that is exposed to your platform. While the document and data sets may vary, as long as there is a well-defined list of applications and fields that are required to be populated, there is a tremendous opportunity to accelerate every facet of client lifecycle management. There are several benefits to both “a point solution” which automates extraction of a well-defined document type/format as well as a more complex, machine learning based utility for a widely defined format of the same document type. 

Implementing Data Ingestion

An intelligent pre- and post-processing data ingestion utility can be implemented in four stages, each increasing in complexity and in the value extracted from your enterprise platform:

Stage 1 
  • Automate the extraction of standard templatized documents. This is beneficial for KYC and AML teams that are handling large volumes of standard identification documents or tax filings which do not vary significantly. 
Stage 2 
  • Manual identification and automated extraction of data elements. In this stage, end users of an enterprise platform can highlight and annotate critical data elements which an intelligent data extraction utility should be able to extract for ingestion into a target application or specified output format. 
Stage 3
  • Automated identification and extraction as a point solution for specific document types and formats.
Stage 4
  • Using stage 1-3 as a foundation, your platform may benefit from a generic automated utility which uses machine learning to fully automate extraction and increase flexibility of handling changing document formats. 
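As a concrete illustration of Stage 1, extracting fields from a fully templatized document can be as simple as matching known labels. The field names and document below are hypothetical; the point is that fixed-format documents such as standard identification forms or tax filings need no ML at all:

```python
# Illustrative Stage 1 sketch: field extraction from a standard,
# templatized document whose labels never vary. The template is made up.

TEMPLATE_FIELDS = ["Name", "Tax ID", "Filing Year"]

def extract_templatized(document: str) -> dict:
    record = {}
    for line in document.splitlines():
        if ":" in line:
            label, _, value = line.partition(":")
            label = label.strip()
            if label in TEMPLATE_FIELDS:
                record[label] = value.strip()
    return record

doc = """Name: Acme Holdings LLC
Tax ID: 12-3456789
Filing Year: 2020"""
fields = extract_templatized(doc)
```

Stages 2 through 4 progressively replace the fixed `TEMPLATE_FIELDS` list with user-annotated data elements and then learned models, but the output contract (a record of named fields ready for ingestion) stays the same.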

You may choose to trifurcate your unstructured document inputs into “simple, medium, and complex” tiers as you develop a cost-benefit analysis to test the outcomes of an automated extraction utility at each of the aforementioned stages. 

Key considerations for an effective data ingestion utility:

  • Your partner should have the domain expertise to help identify the critical data elements that would be helpful to your business and end users 
  • Flexibility to handle new document types, add or subtract critical data elements and support your desired output formats in a cloud or on-premise environment of your choice
  • Scalability & Speed
  • Intelligent upfront classification of required documents that contain the critical data elements your end users are seeking
  • Thought leadership that supports you to consider the upstream and downstream connectivity of your business process

Revolutionizing the Investment Research Process with AI

Introduction

Investment research and analysis is beginning to look very different from what it did five years ago. While five years ago the data deluge could have confounded asset management leaders, they now have a choice in how things are done, thanks to AI and advanced analytics. Advanced analytics helps create value by eliminating biased decisions, enabling automated processing of big data, and using alternative data sources to generate alpha.

With multiple sources of data and emerging AI applications heralding a paradigm shift in the industry, portfolio managers and analysts who earlier used to manually sift through large volumes of unstructured data for investment research can now leverage the power of AI tools such as natural language processing and abstraction to simplify their task. Gathering insights from press releases, filing reports, financial statements, pitches and presentations, CSR disclosures, etc., is a herculean effort and consumes a significant amount of time. However, with AI-powered data extraction tools such as Magic DeepSight™, quick processing of large-scale data is possible and practical.

A tool like Magic DeepSight™ extracts relevant insights from existing data in a fraction of the time and at a fraction of the cost of manual processing. However, the real value it delivers is in supplementing human intelligence with powerful insights, allowing analysts to direct their efforts toward high-value engagements.

Processing Unstructured Data Is Tough

There are multiple sources of information that front-office analysts process daily, all critical to developing an informed investment recommendation. Drawing insights from these sources of structured and unstructured data is challenging and complex. They include 10-K reports, the relatively new ESG reports, investor reports, and various other company documents such as internal presentations and PDFs. The SEC EDGAR database makes it easy to access some of this data, but extracting it from SEC EDGAR and then identifying and compiling relevant insights is still a tedious task. Unearthing insights from other unstructured documents also takes significant manual effort due to the lack of automation.

10-K Analysis using AI

More detailed than a company’s annual report, the 10-K is a veritable powerhouse of information, so accurate analysis of a 10-K report leads to a sounder understanding of the company. A 10-K has five clear-cut sections – business, risk factors, selected financial data, management’s discussion and analysis (MD&A), and financial statements and supplementary data – all packed with value for analysts and investors alike. Due to the breadth and scope of this information, handling it is inevitably time-consuming. The two sections that usually require the most attention, owing to their complexity and possible hidden anomalies, are “Risk Factors” and the MD&A. The “Risk Factors” section outlines all current and potential risks posed to the company, usually in order of importance. In contrast, the “Management’s Discussion and Analysis of Financial Condition and Results of Operations” (MD&A) section is the company management’s perspective on the previous fiscal year’s performance and future business plans.
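A first automation step is simply locating these sections programmatically. The sketch below, using a made-up miniature filing, splits a 10-K's plain text on its item headings so that "Risk Factors" (Item 1A) and the MD&A (Item 7) can be pulled out; real filings would need more robust handling of HTML markup, tables of contents, and repeated headings:

```python
import re

# Hedged sketch: splitting raw 10-K text into sections by item heading.
# The regex and the miniature filing below are illustrative only.

ITEM_HEADING = re.compile(r"^Item\s+(\d+[A-Z]?)\.", re.MULTILINE)

def split_items(filing_text: str) -> dict:
    sections = {}
    matches = list(ITEM_HEADING.finditer(filing_text))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(filing_text)
        sections[m.group(1)] = filing_text[m.start():end].strip()
    return sections

filing = """Item 1. Business
We make widgets.
Item 1A. Risk Factors
Demand may fall; supply chains may fail.
Item 7. Management's Discussion and Analysis
Revenue grew 4% year over year."""

sections = split_items(filing)
risk_factors = sections["1A"]
mda = sections["7"]
```

Once the "Risk Factors" and MD&A text is isolated, it can be fed to downstream NLP models instead of being scanned manually end to end.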

As front-office analysts sift through multiple 10-K reports and other documents in a day, inconsistencies in analysis can inadvertently creep in. 

They can miss important information, especially in the MD&A and Risk Factors sections, as they have many areas to study and more reports in the queue. Even after extracting key insights, it takes time to compare the metrics in the disclosures against a company’s previous filings and industry benchmarks.

Second, there is the risk of human bias and error, where relevant information may be overlooked. Invariably, even the best fund managers succumb to the emotional and cognitive biases inherent in all of us, whether confirmation bias, the bandwagon effect, loss aversion, or the various other biases that behavioral psychologists have formally defined. Failure to consider these issues will lead to suboptimal asset-allocation decisions, and often does.

Using AI to analyze the textual information in the disclosures made within 10-Ks can considerably shorten this lengthy process. Data extraction tools can parse these chunks of text to retrieve relevant insights, and a tool or platform custom-built for your enterprise and trained in the scope of your domain can deliver this information directly to your business applications. More documents can be processed in a shorter time frame, and, armed with new insights, analysts can use their time to take a more in-depth look into the company in question. Implementing an automated AI-based system eliminates human error, allowing investment strategies to be chosen that are significantly more objective in both their formulation and execution.

Analysing ESG Reports

Most public and some private companies today are rated on their environmental, social, and governance (ESG) performance. Companies usually communicate their key ESG initiatives yearly on their websites as a PDF document, and stakeholders study these ESG reports to assess a company’s ESG conduct. Investment decisions and brand perception can hinge on these ratings, so care has to be taken to process the information carefully. In general, higher ESG ratings correlate positively with valuation and profitability and negatively with volatility. The preference for socially responsible investments is most prevalent in the Gen Z and Millennial demographics. Set to make up 72% of the global workforce by 2029, these groups are exhibiting greater concern about organizations’ and employers’ stances on environmental and social issues, bringing under scrutiny a company’s value creation with respect to the ethical obligations that affect the society it operates in.

Although ESG reports are significant in a company’s evaluation by asset managers, investors, and analysts, these reports and ratings are supplied by third-party providers, so there is little to no uniformity in ESG reports, unlike SEC filings. Providers tend to have their own methodologies for determining ratings, and the format of an ESG report varies from provider to provider, making the process of interpreting and analyzing these reports complicated. For example, Bloomberg, a leading ESG data provider, covers 120 ESG indicators – from carbon emissions and climate change effects to executive compensation and shareholder rights. Analysts spend research hours reading reports and managing complex analysis rubrics to evaluate these metrics before making informed investment decisions.

However, AI can make the entire process of extracting relevant insights easy. AI-powered data cleansing and Natural Language Processing (NLP) tools can extract concise information, such as key ESG initiatives, from PDF documents and greatly reduce the text to learn from. NLP can also help consolidate reports into well-defined bits of information that can then be plugged into analytical models, including market risk assessments, as well as other information fields.
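As a minimal illustration of this kind of text reduction, a keyword pass can shrink an ESG report to only the sentences that mention chosen indicators. The keyword list and report text here are hypothetical, and a production NLP pipeline would use trained models rather than plain string matching:

```python
import re

# Illustrative sketch: reducing an ESG report to the sentences that
# mention chosen indicators. Keywords below are hypothetical examples.

ESG_KEYWORDS = {"carbon emissions", "executive compensation", "shareholder rights"}

def esg_sentences(report_text: str) -> list:
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    return [
        s.strip() for s in sentences
        if any(k in s.lower() for k in ESG_KEYWORDS)
    ]

report = ("We cut carbon emissions by 12% this year. "
          "Our cafeteria menu was refreshed. "
          "Executive compensation is now tied to ESG targets.")
highlights = esg_sentences(report)
```

Even this crude filter discards the irrelevant sentence and keeps the two indicator-bearing ones, which is the shape of output an analyst's rubric or a downstream model would consume.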

How Technology Aids The Process

A data extraction tool like Magic DeepSight™ can quickly process large-scale data and also parse unstructured content and alternative data sources like web search trends, social media data, and website traffic. Magic DeepSight™ deploys cognitive technologies like NLP, NLG, and machine learning for this. Another advantage is its ability to plug the extracted information into relevant business applications without human intervention.

About NLP and NLG

Natural Language Processing (NLP) understands and contextualizes unstructured text into structured data, and Natural Language Generation (NLG) analyzes this structured data and transforms it into legible, accessible text. Both processes are powered by machine learning and allow computers to generate text reports in natural human language. The result is comprehensive, machine-generated reporting with insights that were previously invisible. But how reliable is it?

The machine learning approach, which includes deep learning, builds intelligence from a vast number of corrective iterations. It is based on a self-correcting algorithm, a continuous learning loop that becomes more relevant and accurate the more it is used. NLP and AI-driven tools, when trained in the language of a specific business ecosystem like asset management, can deliver valuable insights for every stakeholder, across multiple software environments and in the appropriate fields.
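On the NLG side, the simplest form is template-based generation from structured data. The sketch below is a deliberately minimal illustration (the field names and phrasing are invented); real NLG systems add content selection, sentence planning, and varied phrasing on top of this idea:

```python
# Minimal NLG sketch: turning a structured record into a natural-language
# sentence via a template. The record schema is hypothetical.

def describe_holding(record: dict) -> str:
    direction = "rose" if record["change_pct"] >= 0 else "fell"
    return (f"{record['ticker']} {direction} "
            f"{abs(record['change_pct']):.1f}% in {record['period']}.")

sentence = describe_holding(
    {"ticker": "ACME", "change_pct": 4.2, "period": "Q3"}
)
```

The same record could feed many templates, which is how machine-generated reports stay consistent across stakeholders while the underlying structured data remains the single source of truth.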

Benefits of Using Magic DeepSight™ for Investment Research

  1. Reduced personnel effort

Magic DeepSight™ extracts, processes, and delivers relevant data directly into your business applications, saving analysts’ time and enterprises’ capital.

  2. Better decision-making

By freeing up to 70% of the time invested in data extraction, tagging, and management, Magic DeepSight™ recasts the analysis process. It also supplements decision-making with ready insights.

  3. Improved data accuracy

Magic DeepSight™ validates data at the source. In doing so, it prevents errors and inefficiencies from creeping downstream to other systems.

  4. More revenue opportunities

With a reduced manual workload and the emergence of new insights, teams can focus on revenue generation and use the knowledge generated to build efficient, strategic frameworks.

In Conclusion

Applying AI to the assiduous task of investment research can help analysts and portfolio managers assess metrics quickly; save time, energy, and money; and make better-informed decisions. The time consumed by manual investment research, especially 10-K analysis, is a legacy problem for financial institutions. Coupled with emerging alternative data sources, such as ESG reports, investment research is more complicated today than ever. After completing research, analysts are left with only a small percentage of their time for actual analysis and decision-making.

A tool like Magic DeepSight™ facilitates the research process and improves predictions, investment decision-making, and creativity. It can save about 46 hours of effort and speed up data extraction, tagging, and management by 70%. In doing so, it brings unique business value and supports better-informed investment decisions. However, despite AI’s transformative potential, relatively few investment professionals currently use AI/big data techniques in their investment processes. While portfolio managers continue to rely on Excel and other essential market data tools, the ability to harness AI’s untapped potential might just be the biggest differentiator for enterprises in the coming decade.

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com or request a demo.

From SEC EDGAR to Business Applications: Exploring an Alternative to Manual Data Extraction

The accessibility, accuracy, and wealth of data on the Securities and Exchange Commission’s EDGAR filing system make it an invaluable resource for investors, asset managers, and analysts alike. Cognitive technologies are changing the way financial institutions and individuals use data reservoirs like SEC EDGAR. In a world increasingly powered by data, artificial intelligence-based technologies for analytics and front-office processes are barely optional anymore. Technology solutions are getting smarter, cheaper, and more accurate, meaning your team’s efforts can be directed toward high-value engagements and strategic implementations.

DeepSight™ by Magic FinServ is a tailor-made solution for the unstructured-data challenges of the financial services industry. It uses cutting-edge technology to help you gain more accurate insights from unstructured and structured data, such as datasets from the EDGAR website, emails, contracts, and documents, saving over 70% of existing costs.

AI-based solutions significantly enhance the ability to extract information from the massive data deluge and turn it into knowledge, providing critical input for decision-making. This often translates into higher competitiveness and, therefore, higher revenue.

What are the challenges of SEC’s EDGAR?

The SEC’s EDGAR presents vast amounts of data from public companies’ corporate filings, including quarterly and annual reports. While the reports are comprehensive and more accessible on public portals than before, daily filings and forms require much more diligent, tedious effort to peruse. There is also an increased margin of human error and bias when manually combing through data in such volumes. The quick availability of this public data also means that market competitors track and process it fast, in real time.

The numerous possibilities for utilizing this data come with challenges in analysis and application. The integration of external data into fund management operations has been a legacy problem. Manual front-office processing of massive datasets is tedious and fragmented today, but changing fast. Analyzing such large amounts of data is time-consuming and expensive; therefore, most analysts utilize only a handful of data points to guide their investment decisions, leaving untapped potential trapped in the rest.

After a lukewarm 1.1 percent annual organic net flow in the US between 2013 and 2018, cognitive technologies have brought about a long-due intervention in the form of digital reinvention. Previously limited to applications in the IT industry, these technologies have been transforming capital management for only a short while, but with remarkable impact. While their appearance in finance is novel, they present unique use cases for extracting and managing data.

How can technology help with the processing of EDGAR data used in the industry?

Data from EDGAR is used across various business applications. Intelligent reporting, zero redundancies, and timely updates ultimately drive the quality of investment decisions. As investment decisions can be highly time-sensitive, especially in volatile economic conditions, extracting and tracking relevant information in real time is crucial.

Magic DeepSight™ is trained to extract relevant and precise information from SEC’s EDGAR, organize this data per your requirements, and deliver it in a spreadsheet or via APIs, or, even better, ingest it directly into your business applications. Since Magic DeepSight™ is built ground-up with AI technology, it has a built-in feedback loop that lets you train the system automatically with every use.

This focused information retrieval and precision analysis hastens and enhances the investment assessment process of a fund or asset manager – a process fraught with tedious data analysis, complicated calculations, and bias when done solely manually.

Investment advice collaterals that are accurate, informative, and intelligible are part of the value derived through Magic DeepSight™. NLP and AI-driven tools, especially those trained in the language of your business ecosystem, can help you derive insights across multiple software environments, in the appropriate fields. And all of it can be customized for the stakeholder in question.

Meanwhile, tighter regulations on the market have also increased the costs of compliance. Technology offsets these costs with unerring and timely fulfillment of regulatory requirements. The SEC has had company filings under the magnifying glass in recent exams, and hefty fines are being imposed on firms for not meeting filing norms. Apart from the pecuniary implications, fulfilling these requirements pertains to your firm’s health and the value perceived by your investors.

What’s wrong with doing it manually?

Most front-office processes remain manual today, forcing front-office analysts to slog through large chunks of information to gain valuable insights. The information on EDGAR is structured uniformly, but the lengthy retrieval process negates the benefits of this organization of data. For example, if you wish to know acquisition-related information about a public company, you can easily access its Form S-4 and 8-K filings on the SEC EDGAR website. But going through all the text to find precisely what is needed takes time. With Magic DeepSight™, you can automate this extraction so analysts can focus on the next steps.
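As an illustration of the retrieval step, a query for a company's filings can be constructed against EDGAR's public browse endpoint. The sketch below only builds the URL (fetching and parsing the response are omitted), and the CIK shown is just a sample value:

```python
from urllib.parse import urlencode

# Sketch of constructing an EDGAR company-filings query using the
# public browse-edgar URL scheme. Fetching/parsing is omitted here.

EDGAR_BROWSE = "https://www.sec.gov/cgi-bin/browse-edgar"

def filings_url(cik: str, form_type: str, count: int = 40) -> str:
    params = {
        "action": "getcompany",
        "CIK": cik,
        "type": form_type,   # e.g. "8-K" or "S-4" for acquisition activity
        "count": count,
    }
    return f"{EDGAR_BROWSE}?{urlencode(params)}"

url = filings_url("0000320193", "8-K")
```

An automated pipeline would fetch this listing on a schedule, download the new filings it references, and hand the text to the extraction and NLP stages described above.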

And while a team of analysts is going through multiple datasets quickly, it is likely that relevant insights falling outside the few main parameters being considered are overlooked. If such a hurdle arises with organized data, processing unstructured documents – large blocks of text, press releases, company websites, and PowerPoint presentations – unquestionably takes much longer and is equally problematic. With Magic DeepSight™, you can overcome this blind spot: it can quickly process all values in a given dataset and, using NLP, efficiently extract meaningful information from unstructured data across multiple sources. Using this information, Magic DeepSight™ can surface new patterns and insights to complement your research team.

How does Magic DeepSight™ transform these processes?

While most data management solutions on the market are industry-agnostic, Magic DeepSight™ is purpose-built for financial services enterprises. AI models like Magic DeepSight™'s, trained on financial markets datasets, can comprehend and extract the right data points. Built with an advanced domain-trained NLP engine, it analyzes data from an industry perspective, customized to your needs. Magic DeepSight™ is available on all cloud environments, and on-premises if needed. Moreover, it integrates across your existing business applications without disrupting your current workflow.

DeepSight™ is built on a reliable stack of open-source libraries, complemented by custom code wherever needed, and trained to perfection by our team. This versatility is also what makes it easily scalable. Magic DeepSight™ can handle a wide range of information formats and select the most appropriate library for any dataset. With Magic DeepSight™, the search, download, and extraction of relevant information from the SEC EDGAR database becomes easy and efficient. Information on forms, such as 10-K disclosures covering risk assessment, governance, conflicts of interest, and more, is accurately summarized in a fraction of the time previously taken, freeing up space for faster and better-informed decision-making. 

But it is more than just a data extraction tool. DeepSight™ also integrates with other technologies such as RPA, smart contracts, and workflow automation, making it an end-to-end solution that adds value to each step of your business processes. 

Our team can also customize DeepSight™ to your enterprise's requirements, delivering automated, standardized, and optimized information-driven processes across front-to-back offices.

What business value does Magic DeepSight™ provide?

  • It automates the process of wading through vast amounts of data to extract meaningful insights, saving personnel time and effort and reducing costs by up to 70%.
  • It becomes an asset to research processes by employing NLP to extract meaningful information from an assortment of unstructured document types and formats, improving the overall quality of your firm's data reservoir.
  • The breadth of insights made possible with AI offers a richer perspective that was previously hidden, helping you drive higher revenues with better-informed investment decisions. 

Magic DeepSight™ digitally transforms your overall operations. Firms that adopt AI, data, and analytics will be better positioned to optimize their business applications. 

To explore Magic DeepSight™ for your organization, write to us at mail@magicfinserv.com

Optimizing Business Processes for a Post COVID-19 World

COVID-19, with its associated social distancing and quarantine restrictions, has dictated new measures for business standards, forcing companies into a major overhaul in the way they work. Remote working is just one key part of this change, and it impacts workplaces and the global workforce significantly.

This cause-effect relationship is now at the forefront, fundamentally transforming existing business models, business practices, business processes, and supporting structures and technology. According to Gartner, “CIOs can play a key role in this process since digital technologies and capabilities influence every aspect of business models.”

Business process management (BPM) was the primary means for investment banks and hedge funds to make internal workflows efficient. In investment banking, BPM focused on automating operations management by identifying, modeling, analyzing, and subsequently improving business processes.

Most investment firms have some form of BPM for various processes. For instance, compliance processes at most investment banks and hedge funds have some form of software automation in their workflows. This is because banking functions such as compliance, fraud, and risk management exert pressure to develop cost-effective processes. Wherever automation was not possible, labor-intensive manual functions were outsourced through KPOs to comparatively cheaper South-East Asian destinations, thereby reducing costs. With COVID-19's social distancing norms in place, this traditional KPO model for handling front-, middle-, and back-office processes is breaking down, as it relies on several people working together. There is an urgent need to rethink these processes with a fresh vision and build intelligent, remotely accessible systems for handling processes like KYC, AML, document digitization, data extraction from unstructured documents, contract management, trade reconciliation, invoice processing, corporate actions, etc.

Now more than ever, organizations need to embrace agility, flexibility, and transformation. As per KPMG, the modern enterprise must become agile and resilient to master disruption and maintain momentum. Optimizing the operations process can transform the business to support lean initiatives that lead to innovation—an aspect that can no longer be ignored. With the help of cross-functional domain experts, organizations can discover and subsequently eliminate inefficiencies in the operations and business processes by identifying the inconsistencies, redundancies, and gaps that can be streamlined.  Intelligent Workflow initiatives and goals align business improvement with business objectives and visibly reduce the probability of negative ROI and impact on projects and initiatives.

Using new technologies like AI and Machine Learning, organizations can quickly adapt and improve with precision and gain the multi-layered visibility needed to drive change and reach strategic goals across an enterprise. The proper use of Artificial Intelligence can solve business case problems and relieve enterprises from various technology or data chokes. AI techniques can help traditional software perform tasks better over time, thus empowering people to focus their time on complex and highly strategic tasks.

Best-Practices for Adoption of AI-Based BPM Solutions

Before moving into AI-based process automation, business leaders in investment banking must realize that they need to shift their perspective on emerging technology: few AI projects return the desired result, 100% of the time, from their first deployment.

AI ventures require ample algorithmic tuning, so it can take several months to reach a state of high precision and confidence. This matters because banks cannot jump into large AI projects in their core business processes and expect seamless functioning across the board straightaway. Any large project would temporarily impede the specific business process, or push it into downtime, before the AI project is complete. 

So bankers need to develop a try-test-learn-improve mentality toward AI to gain confidence in data science projects. It is also advisable to choose an AI service provider with extensive experience and knowledge of the domain to achieve the desired results. An investment firm should expect a prototype solution in the first iteration, which it then improves by incorporating user feedback and correcting minor issues until it reaches MVP status. Smaller, shorter projects that focus on improving a particular sub-process within the entire workflow are better suited to investment firms. This approach allows small AI teams to develop and deploy projects much faster. Such projects are advisable because they bring a significant positive business impact without hindering the current workflow and process.

Such attitudinal changes are decisive shifts from the conventional approach to technology that investment banking leaders have taken. This is presumably not something firms can change overnight and requires careful preparation, planning, and a strategy to help the workforce have an incremental improvement approach to business processes. These fundamental shifts demand that leaders prepare, motivate, and equip their workforce to make a change. But leaders must first be prepared themselves before inculcating this approach in their organizations.

Our interactions with CXOs in the investment banking industry indicate that process optimization applications of AI can bring a disproportionate benefit in operational efficiency, sorely needed in these challenging times.

Magic FinServ offers focused process optimization solutions for the financial services industry, leveraging new-generation technologies such as AI and ML across hedge funds, asset management, and FinTechs. This allows financial services institutions to translate business strategy and operational objectives into successful enterprise-level changes, positively impacting revenues and bottom-line growth. With relevant domain knowledge of capital markets and technological prowess, our agile team builds customized turnkey solutions that can be deployed quickly and demonstrate returns as early as two weeks from the first deployment. Discover the transformation possibilities with our experts on AI solutions for hedge funds and asset managers. 

Write to us at mail@magicfinserv.com to book a consultation.

Using AI for Contract Lifecycle Management

Contracting as an activity has been around ever since the start of the service economy. But despite it being a well-established practice, very few companies have mastered the art of managing contracts efficiently or effectively. According to a KPMG report, inefficient contracting leads to a loss of 5% to 40% of the value of a given deal in some cases. 

The main challenge facing companies in the financial services industry is the sheer volume of contracts they have to keep track of; these contracts often lack uniformity and are hard to organize, maintain, and update on a regular basis. Manual maintenance of contracts is not only cumbersome but also prone to multiple forms of error. It also risks missing important deadlines or scheduled follow-ups written into the contract, which could lead to expensive repercussions.

Contract management is a way to manage multifarious contracts from vendors, partners, customers, etc., so that data from these contracts can be easily identified, segregated, labeled, and extracted for various use cases, and updated regularly. 

Recent technological advances in Artificial Intelligence (AI) and Machine Learning are now helping companies resolve many of these contracting challenges by delivering efficient contract management as a seamless, automated solution. 

Benefits of Using AI in the contract management lifecycle

Basic Search

AI can enhance the searchability of contracts, including clauses, dates, notes, comments, and even the metadata associated with them. The AI method used for this purpose is natural language processing (NLP), and the extraction of metadata is done at a granular level to enable the user to search a vast repository of contracts effectively.

Example: This search function would be extremely useful for relationship managers or chatbots answering customer queries pertaining to a particular contract. 
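A toy sketch of the idea follows, using plain regular expressions in place of a full NLP pipeline; the metadata fields, contract ids, and sample contract texts are invented for illustration.

```python
import re
from collections import defaultdict

DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # ISO dates, for simplicity

def extract_metadata(contract_id: str, text: str) -> dict:
    """Pull granular metadata (dates, flagged clauses) out of raw contract text."""
    return {
        "id": contract_id,
        "dates": DATE_RE.findall(text),
        "has_nda_clause": "non-disclosure" in text.lower(),
    }

def build_index(contracts: dict) -> dict:
    """Invert word -> contract ids so queries scan the index, not every contract."""
    index = defaultdict(set)
    for cid, text in contracts.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(cid)
    return index

contracts = {
    "C-001": "Renewal due 2025-01-31. Non-disclosure obligations apply.",
    "C-002": "Master services agreement, effective 2024-06-01.",
}
index = build_index(contracts)
```

A production system would replace the regexes with a trained NLP model, but the search pattern is the same: extract metadata once, index it, and answer queries against the index.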

Analysis and Diagnostic Search: AI can proactively identify expiry dates, renewal dates, follow-up dates, or low KPI compliance, and then suggest a course of action or flag alerts. Analytics can further be used to study and predict risks or non-compliance, notifying relevant stakeholders about pending payments or negotiations.

Example: This can be effectively utilized to improve customer satisfaction as well as to guide negotiations based on accessible information.
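The renewal-date flagging described above can be sketched in a few lines; the contract ids and dates here are invented, and a real system would pull them from the extracted contract metadata.

```python
from datetime import date, timedelta

def flag_renewals(renewal_dates: dict, today: date, horizon_days: int = 30) -> list:
    """Return contract ids whose renewal date falls within the alert horizon."""
    cutoff = today + timedelta(days=horizon_days)
    return [cid for cid, renewal in renewal_dates.items() if today <= renewal <= cutoff]

renewal_dates = {
    "C-001": date(2025, 1, 20),
    "C-002": date(2025, 6, 1),
}
alerts = flag_renewals(renewal_dates, today=date(2025, 1, 5))
```

Hooked to a notification service, such a check is what turns a static contract repository into the proactive alerting the section describes.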

Cognitive Support: AI is highly sought after for its predictive intelligence. AI's predictive capabilities can be used to analyze existing contracts to understand contract terms and clauses. Its pattern-recognition algorithms can identify points of parity and differentiation in pricing, geography, products, and services. Based on this predictive analysis, AI can suggest the inclusion or exclusion of clauses, terms, and conditions when authoring new contracts. 

Example: AI systems may automatically predict and suggest clauses pertaining to NDAs (non-disclosure agreements) based on historical contracts that have been previously processed and the events associated with them.

Dynamic Contracts: Advanced AI can be used to build adaptive, dynamic contracts. Based on past data, and taking into account external factors such as market fluctuations, currency exchange rates, prices, labor rates, and changes in laws and regulations, AI algorithms can draft a contract. Such a contract would still require auditing by an expert but would nonetheless reduce the effort required to generate it.

Example: AI can be used to assess existing contracts to make them GDPR (General Data Protection Regulation) compliant. It inserts the relevant data privacy terms and conditions into the contract and subsequently notifies the concerned stakeholders about the changes so they can be verified.

Challenges in contract management with AI-ML

The use of AI and Machine Learning for contract management is highly promising, but it also faces a few limitations. 

Machine Learning (ML) is only as effective as the training data used to train its algorithms. Therefore, before any AI-ML application is put into practice, an exhaustive dataset of contracts must be developed and then classified, sorted, labeled, and made retrievable via metadata. This provides the base, as training data, on which AI builds, and therefore puts the 'intelligence' in the contract management process.

For that exhaustive dataset to be developed, all contract data must first be assimilated. In many organizations, contracts are still hard copies lying in cabinets; approximately 10% of written agreements aren't even traceable. Even when digitized contracts are available, they must first be in a uniform format for AI to read their contents. This requires not only scanning all the documents but also the ability to extract the meaning of the content in the contracts. 

Overcoming the challenges

To make contract portfolios AI-ready, the first step is to digitize the contract documents. This can be done using OCR (optical character recognition), which reads a physical document as a human eye would and converts it into digitized text that can easily be searched with ML formulas. While it may be too onerous to scan all historical contracts, this can be accomplished using a CMS (contract management software) capable of converting documents into machine-readable files, thus creating a significant data pool. AI can then use this data to gain relevant insights; when AI algorithms access large pools of data, their ability to decipher patterns and provide insights becomes much stronger. Predictive insights can be achieved by incorporating NLP (natural language processing). NLP allows contract groups to identify when contracts have deviated from defined standards, which makes the approval and negotiation processes much faster because stakeholders know how the current contract version deviates from those standards. NLP is also used to report risk based on the meaning of the language rather than just string matching, for example, identifying contracts that are about to expire and starting their renewal process.
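One simple way to approximate the "deviation from defined standards" check is a similarity score against the approved clause wording. The sketch below uses Python's standard difflib rather than a full NLP engine, and the standard clause text and the 0.3 threshold are illustrative assumptions, not a recommended policy.

```python
import difflib

STANDARD_CLAUSE = (
    "Either party may terminate this agreement with thirty days written notice."
)

def deviation_score(clause: str, standard: str = STANDARD_CLAUSE) -> float:
    """0.0 means identical to the standard wording; 1.0 means completely different."""
    return 1.0 - difflib.SequenceMatcher(None, clause.lower(), standard.lower()).ratio()

def flag_deviations(clauses: list, threshold: float = 0.3) -> list:
    """Flag clauses that drift too far from the approved standard for human review."""
    return [c for c in clauses if deviation_score(c) > threshold]

suspect = flag_deviations([
    STANDARD_CLAUSE,
    "Payment is due in ninety days via wire transfer.",
])
```

A real NLP engine would compare meaning (embeddings, entailment) rather than character overlap, but the workflow is identical: score each clause against the standard and route outliers to reviewers.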

Conclusion

Potentially, AI in contract management will change the contract lifecycle and elevate the strategic role of contract managers, putting them in a stronger position when negotiating contract terms. It can also help tremendously in strategic planning, risk management, supplier search, and final selection, enhancing the efficiency and effectiveness of category managers. AI innovation will continue to play a vital role as contract managers educate themselves and ensure that their contract processes are fully digitized and AI-ready.

Get started with Artificial Intelligence by booking a workshop with us today!

The Underlying Process of Predictive Analysis

Predictive Analysis – What is it?

Whenever you hear the term "Predictive Analysis", a question pops up in mind: "Can we predict the future?" The answer is no, and the future remains a beautiful mystery, as it should be. However, predictive analysis does forecast the possibility of a future event, with an acceptable percentage of deviation from the result. In business terms, predictive analysis is used to examine historical data and interpret risks and opportunities for the business by recognizing trends and behavioral patterns.

Predictive analysis is one of the three forms of data analysis, the other two being descriptive analysis and prescriptive analysis. Descriptive analysis examines historical data and evaluates current metrics to tell whether the business is doing well; predictive analysis forecasts future trends; and prescriptive analysis provides viable solutions to a problem and their impact on the future. In simpler words, descriptive analysis identifies the problem or scenario, predictive analysis defines the likelihood of the problem or scenario and why it could happen, and prescriptive analysis explores the various solutions and consequences for the betterment of the business.


Predictive Analysis process

Predictive analysis uses multiple variables to define the likelihood of a future event with an acceptable level of reliability. Let's look at the underlying process of predictive analysis:

Requirement – Identify what needs to be achieved

This is the preliminary step in the process, where you identify what needs to be achieved (the requirement), as it paves the way for data exploration, the building block of predictive analysis. It articulates what the business needs to do, vis-à-vis what it does today, to become more valuable and enhance its brand value. This step defines which type of data is required for the analysis; the analyst may enlist domain experts to help determine the data and its sources.

  1. Clearly state the requirement, goals, and objective.
  2. Identify the constraints and restrictions.
  3. Identify the data set and scope.

Data Collection – Ask the right question

Once you know the sources, the next step is to collect the data, and you must ask the right questions to collect it. For example, to build a predictive model for stock analysis, historical data must contain prices, volumes, etc., but you should also consider how useful social network analysis would be for discovering behavioral and sentiment patterns.

Data Cleaning – Ensure Consistency

Data may be fetched from multiple sources. Before it can be used, it needs to be normalized into a consistent format. Data cleaning typically includes:

  1. Normalization: convert data into a consistent format.
  2. Selection: search for outliers and anomalies.
  3. Pre-processing: search for relationships between variables; generalize the data to form groups and/or structures.
  4. Transformation: fill in missing values.

Data cleaning removes errors and ensures consistency of data. If the data is of high quality, clean, and relevant, the results will be reliable; this is, in fact, a case of "garbage in, garbage out". Data cleaning supports better analytics as well as all-round business intelligence, which facilitates better decision-making and execution.
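A minimal sketch of those cleaning steps on a single numeric series, using only the standard library; the order of steps and the 1.5-standard-deviation outlier cutoff are illustrative choices, deliberately tight so the tiny example trips it.

```python
from statistics import mean, stdev

def clean(series: list) -> list:
    """Fill missing values, drop outliers, then normalize to [0, 1]."""
    # Transformation: fill missing values with the mean of the observed points.
    observed = [x for x in series if x is not None]
    filled = [x if x is not None else mean(observed) for x in series]
    # Selection: drop outliers more than 1.5 standard deviations from the mean.
    m, s = mean(filled), stdev(filled)
    kept = [x for x in filled if abs(x - m) <= 1.5 * s]
    # Normalization: rescale to [0, 1] for a consistent format across sources.
    lo, hi = min(kept), max(kept)
    return [(x - lo) / (hi - lo) for x in kept]

cleaned = clean([1.0, 2.0, None, 3.0, 1000.0])  # 1000.0 is the outlier
```

Real pipelines work column-by-column over tabular data and choose imputation and outlier rules per field, but each column goes through essentially these four steps.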

Data collection and cleaning, as described above, depend on asking the right questions. Volume and variety describe the results of data collection, but there is a third dimension to focus on: data velocity. Data must not only be acquired quickly but also processed at a good rate for faster results. Some data has a limited lifetime and will not serve its purpose for long, so any delay in processing would require acquiring new data.

Analyze the data – Use the correct model

Once we have the data, we need to analyze it to find hidden patterns and forecast results. The data should be structured in a way that reveals patterns and identifies future trends.

Predictive analytics encompasses a variety of statistical techniques, from traditional methods such as data mining and statistics to advanced methods like machine learning and artificial intelligence, which analyze current and historical data to put a numerical value on the likelihood of a scenario. Traditional methods are normally used where the number of variables is manageable; AI and machine learning are used where there is a large number of variables to manage. Over the years, organizations' computing power has increased many-fold, which has driven the focus on machine learning and artificial intelligence.

Traditional Methods:

  1. Regression Techniques: Regression is a mathematical technique used to estimate the cause-and-effect relationship among variables.

In business, key performance indicators (KPIs) are the measure of the business, and regression techniques can be used to establish the relationship between a KPI and variables such as economic or internal parameters. Two types of regression are normally used to find the probability of an event occurring:

  1. Linear Regression
  2. Logistic Regression
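For intuition, simple linear regression has a closed-form least-squares fit, while logistic regression maps a linear score to a probability via the sigmoid function. The from-scratch sketch below illustrates both ideas on toy numbers; it is not a production fitting routine.

```python
import math

def fit_linear(xs: list, ys: list) -> tuple:
    """Closed-form least squares for y = a + b*x (simple linear regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

def logistic(score: float) -> float:
    """Logistic regression squashes a linear score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-score))

# A perfectly linear toy KPI series: y doubles x, so the fit is a=0, b=2.
a, b = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])
```

In practice libraries such as scikit-learn fit these models (including the iterative optimization logistic regression needs for its coefficients), but the two equations above are what those fits produce.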

  2. Time Series Analysis: A time series is a series of data points indexed, listed, or graphed in time order.

Decision Tree

Decision Trees are used to solve classification problems. A Decision Tree determines the predictive value based on a series of questions and conditions.

Advanced Methods – Artificial Intelligence / Machine Learning

Special Purpose Libraries

Nowadays, many open frameworks and special-purpose libraries are available for developing models. Users can use these to perform mathematical computations and visualize data-flow graphs. These libraries can handle everything from pattern recognition to image and video processing and can run over a wide range of hardware. They can help with:

  1. Natural Language Processing (NLP): Natural language refers to how humans communicate with each other in day-to-day activities. It could be words, signs, or e-data such as emails and social media activity. NLP refers to analyzing this unstructured or semi-structured data.
  2. Computer Vision

Algorithms

Several algorithms used in Machine Learning include:

1. Random Forest

Random Forest is a popular ensemble machine learning algorithm. It uses a combination of several decision trees as a base and aggregates their results. These decision trees use one or more distinct factors to predict the output.
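To make the voting mechanics concrete, the toy sketch below hand-builds three decision "stumps" on distinct factors of an invented loan application and aggregates them by majority vote. A real random forest also bootstrap-samples the training data and randomizes feature choice per tree; those parts are omitted here to keep the example deterministic.

```python
from collections import Counter

# Each stump votes based on one distinct factor (toy thresholds, invented data).
stumps = [
    lambda row: "approve" if row["income"] > 50_000 else "reject",
    lambda row: "approve" if row["credit_score"] > 650 else "reject",
    lambda row: "approve" if row["debt_ratio"] < 0.4 else "reject",
]

def forest_predict(row: dict) -> str:
    """Aggregate the individual tree votes with a simple majority."""
    votes = Counter(stump(row) for stump in stumps)
    return votes.most_common(1)[0][0]

applicant = {"income": 80_000, "credit_score": 600, "debt_ratio": 0.2}
```

Here two of the three stumps vote "approve", so the ensemble approves even though one tree disagrees, which is exactly how the aggregation smooths out individual trees' errors.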

2. Neural Networks (NN)

Neural networks approach a problem the way a human brain would. NNs are widely used in speech recognition, medical diagnosis, pattern recognition, spell checking, paraphrase detection, etc.

3. K-Means

K-Means is used to solve clustering problems, finding a fixed number (k) of clusters in a set of data. It is an unsupervised learning algorithm: it works on its own, with no specific supervision.

Interpret results and decide

Once the data is extracted, cleaned, and checked, it's time to interpret the results. Predictive analytics has come a long way and goes beyond suggesting results and benefits from the predictions: it provides the decision-maker with an answer to the question "Why will this happen?"

A few use cases where predictive analysis can be useful for a FinTech business

Compliance – Predictive analysis can be used to detect and prevent trading errors and system oversights. The data can be analyzed to monitor behavioral patterns and prevent fraud. Predictive analytics can help companies conduct better internal audits, keep up with rules and regulations, and improve the accuracy of audit selection, thus reducing fraudulent activity.

Risk Mitigation – Firms can monitor and analyze operational data to detect error-prone areas, reduce outages, and avoid responding late to events, thus improving efficiency.

Improving customer service – Customers have always been the center of business. Online reviews, sentiment analysis, and social media data analysis can help a business understand customer behavior and re-engineer its products with tailored offerings.

Being able to predict how customers, industries, markets, and the economy will behave in certain situations can be incredibly useful for a business. Success depends on choosing the right data set, with quality data, and defining good models whose algorithms explore the relationships between different data sets to identify patterns and associations. However, FinTech firms have their own data management challenges caused by data silos and incompatible systems. Data sets are becoming so large that analyzing them for patterns, and managing risk and return, is increasingly difficult.

Predictive Analysis Challenges

Data Quality / Inaccessible Data

Data quality is still the foremost challenge faced by predictive analysts. Poor data leads to poor results; good data helps shape major decision-making.

Data Volume / Variety / Velocity

Many problems in predictive analytics belong to the big data category. The volume of data generated by users can run into petabytes, which can challenge existing computing power. With increasing Internet penetration and autonomous data capture, the velocity of data is also rising. As volume and velocity increase, traditional methods like regression models become unstable for analysis.

Correct Model

Defining the correct model can be tricky, especially when much is expected of it. The same model may be asked to serve several different purposes, but it does not always make sense to create one large, complex model. Rather than a single model that covers it all, a collection of smaller models can together deliver better understanding and predictions.

The right set of people

Data analytics is not a "one-man army" show. It requires the right blend of domain knowledge and data science knowledge: data scientists should be able to ask the right what-if questions of domain experts, and domain experts should be able to verify the model against appropriate findings. This is where Magic FinServ can bring value to your business. At Magic FinServ, we have the right blend of domain expertise and data science experts to deliver intelligence and insights from data using predictive analytics.

Magic FinServ – Value we bring using Predictive Analysis

Magic Finserv Offerings

Magic FinServ has designed a set of offerings specifically to solve the unstructured and semi-structured data problem for the financial services industry.

Market Information – Research reports, news, business and financial journals, and websites providing market information generate massive amounts of unstructured data. Magic FinServ provides products and services to tag metadata and extract valuable, accurate information, helping our clients make timely, accurate, and informed decisions.

Trade – Trading generates structured data, yet there is huge potential to optimize operations and make automated decisions. Magic FinServ has created tools using Machine Learning and NLP to automate several process areas, such as trade reconciliations, to improve the quality of decision-making and reduce effort. We estimate that almost 33% of the effort can be eliminated in almost every business process in this space.
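A stripped-down illustration of what trade reconciliation automates: match internal and counterparty records by trade id and report the breaks. Real matching involves fuzzy keys, many more fields, and ML-assisted classification of breaks; the trade data and tolerance below are invented for illustration.

```python
def reconcile(internal: dict, external: dict, tolerance: float = 0.01) -> list:
    """Match trades by id and report breaks: missing legs or field mismatches."""
    breaks = []
    for tid, trade in internal.items():
        other = external.get(tid)
        if other is None:
            breaks.append((tid, "missing at counterparty"))
        elif trade["qty"] != other["qty"]:
            breaks.append((tid, "quantity mismatch"))
        elif abs(trade["price"] - other["price"]) > tolerance:
            breaks.append((tid, "price mismatch"))
    # Trades the counterparty reports that we have no record of.
    breaks += [(tid, "missing internally") for tid in external if tid not in internal]
    return breaks

internal = {
    "T1": {"qty": 100, "price": 10.00},
    "T2": {"qty": 50, "price": 20.05},
}
external = {
    "T1": {"qty": 100, "price": 10.00},
    "T2": {"qty": 50, "price": 20.50},
    "T3": {"qty": 25, "price": 5.00},
}
```

The automated step that saves analyst effort is exactly this exhaustive pairwise comparison; human attention is then spent only on the break list, not on the matched majority.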

Reference data – Reference data is structured and standardized; however, it tends to generate many exceptions that require proactive management. Organizations spend millions every year running reference data operations. Magic FinServ uses Machine Learning tools to help operations teams reduce the effort spent on exception management, improve the quality of decision-making, and create a clean audit trail.

Client/Employee data – Organizations often do not realize how much client-sensitive data resides on desktops and laptops. Recent regulations like GDPR make it binding to check this menace. Most of this data is semi-structured and resides in Excel files, Word documents, and PDFs. Magic FinServ offers products and services that help organizations identify the quantum of this risk and then take remedial action.

RPA vs Cognitive RPA – Journey of Automation

Evolution of RPA

IT outsourcing took off in the early '90s, with broadening globalization driven primarily by labor arbitrage. This was followed by the BPO outsourcing wave in the early 2000s.

The initial wave of outsourcing delivered over 35% cost savings on average but remained inefficient due to low productivity and the massive demand for constant training driven by attrition.

As labor arbitrage became less lucrative with rising wage and operational costs, automation looked like a viable alternative for IT and BPO service providers to improve efficiency, though this automation was mostly incremental. At the same time, high-cost locations had to compete against their low-cost counterparts and realized that the only way to stay ahead in this race was to reduce human effort.

Robotic Process Automation (RPA) was born from the culmination of these two needs.

What is RPA?

RPA is software that automates high volumes of repetitive manual tasks. It increases operational efficiency and productivity and reduces cost. RPA enables businesses to configure their own software robots (RPA bots), which can work 24x7 with high precision and accuracy.

The first generation of RPA started with programmable RPA solutions, called "Doers".

Programmable RPA tools are programmed to work with various systems via screen scraping and integration. They take input from other systems and make decisions to drive actions. The most repetitive processes are automated with programmable RPA.

However, programmable RPA works only with structured data and legacy systems. It is highly rule-based, without any learning capability.

Cognitive automation is an emerging field that provides a solution to overcome the limitations of first-generation RPA systems. Cognitive automation is also called "Decision-maker" or "Intelligent Automation".

Here is a diagram published by the Everest Group that shows the power of AI/ML in a traditional RPA framework.

Cognitive automation uses artificial intelligence (AI) capabilities such as optical character recognition (OCR) and natural language processing (NLP) along with RPA tools to provide end-to-end automation solutions. It deals with both structured and unstructured data, including text-heavy reports. It is probabilistic but can learn a system's behavior over time and converge toward deterministic solutions.

There is another type of RPA solution: "self-learning solutions", called "Learners".

Programmable RPA solutions need significant programming effort and technique to enable interaction with other systems; self-learning solutions program themselves.

There are various learning methods adopted by RPA tools:

  • Some tools use historical data (when available) alongside current data, monitoring employee activity over time to understand the tasks. They begin completing the tasks themselves only after gaining enough confidence in the process.
  • Other tools learn by watching tasks being performed manually. They identify the activities that make up each task and begin automating them; feedback from the operations team enhances the tool’s capabilities and raises its automation level.
  • Growing business complexity is driving the shift from rule-based processing to data-driven strategies. Cognitive solutions help the business manage both known and unknown areas, take complex decisions, and identify risk.
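The first learning method above, observing human activity and automating only once enough confidence has been gained, can be sketched roughly as follows. The class name, thresholds, and task labels are illustrative inventions, not any vendor's API.

```python
from collections import Counter, defaultdict

class TaskLearner:
    """Illustrative self-learning bot: records which action a human takes
    for each task type, and proposes automation only once one action
    dominates with sufficient confidence and sample size."""

    def __init__(self, threshold: float = 0.9, min_samples: int = 20):
        self.threshold = threshold      # required share of the top action
        self.min_samples = min_samples  # minimum observations before acting
        self.history = defaultdict(Counter)

    def observe(self, task_type: str, action: str) -> None:
        """Record one human-performed action for a task type."""
        self.history[task_type][action] += 1

    def suggest(self, task_type: str):
        """Return the action to automate, or None to defer to a human."""
        seen = self.history[task_type]
        total = sum(seen.values())
        if total < self.min_samples:
            return None  # not enough evidence yet
        action, count = seen.most_common(1)[0]
        return action if count / total >= self.threshold else None

learner = TaskLearner(threshold=0.9, min_samples=5)
for _ in range(5):
    learner.observe("invoice_approval", "approve")
print(learner.suggest("invoice_approval"))  # "approve"
```

Feedback from the operations team would map naturally onto further `observe` calls, raising or lowering the tool's confidence in each candidate automation.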

As per HfS Research, the RPA software and services market is expected to grow to $1.2 billion by 2021, at a compound annual growth rate of 36%.

Chatbots, human agents, agent-assist tools, RPA robots, cognitive robots: RPA combined with ML and AI creates a smart digital workforce and unleashes the power of digital transformation.

The focus has shifted from efficiency to intelligence in business process operations.

Cognitive solutions are the future of automation, and data is the key driving factor in this journey.

We, at MagicFinServ, have developed several solutions to help our clients get more out of structured and unstructured data. Our endeavor is to use a modern technology stack and frameworks, including Blockchain and Machine Learning, to deliver higher value from structured and unstructured data to Enterprise Data Management firms, FinTechs, and large buy-side and sell-side corporations.

Understanding of data and domain is crucial in this process. MagicFinServ has built a strong domain-centric team that understands the complex data of the Capital Markets industry.

MagicFinServ’s innovative cognitive ecosystem is solving real-world problems.

Want to talk about our solution? Please contact us at https://www.magicfinserv.com/.

Corporate Actions: Beyond The Golden Copy

The corporate actions industry is making great strides toward automation. However, despite all the technology advancements, a significant portion of the corporate actions data management process still requires manual processing, mainly due to the increasing complexity of corporate actions driven by easier cross-border trading and local market nuances.

Another big reason the corporate actions industry has not achieved a significant degree of automation is that corporate actions processing is a back-office function, normally seen as a cost to be managed rather than a revenue generator, which deters securities firms from investing heavily in it.

Corporate actions processing can be divided into three parts:

  1. Capture of Corporate Action data
  2. Processing of Corporate Action data
  3. Dissemination of tailored Corporate Action data

Each of the three parts has its own challenges. Capturing the data is the first step, and it is where firms work toward a “golden copy”: the best possible value for each data point, selected from a variety of sources. Generating a golden copy is the first headache for a securities firm. Data from issuers is normally transmitted in the form of press releases, prospectuses, and other free-text files (e.g., PDF, HTML). The challenge for the securities firm lies in translating this unstructured data into information and transmitting it to various stakeholders using industry standards. These stakeholders are the familiar financial industry participants: custodians, sub-custodians, brokers, prime brokers, and so on. Their primary aim is to capture data from various sources and produce a golden copy for investors. This golden copy is disseminated to investors and intermediaries according to their needs; for example, an asset or investment manager may need the information as soon as possible to set investment strategy, whereas a portfolio manager may require it at end of day to adjust the NAV.
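A minimal sketch of golden-copy creation, assuming a simple source-priority rule: each field is taken from the highest-priority vendor feed that supplies it. The vendor names, field names, and priority scheme are invented for illustration; real platforms use far richer per-field validation and conflict rules.

```python
def build_golden_copy(feeds, source_priority):
    """Merge corporate-action records from multiple vendor feeds into one
    'golden copy', filling each field from the highest-priority source
    that supplies a non-null value."""
    golden = {}
    # Rank feeds by position of their source in the priority list.
    ranked = sorted(feeds, key=lambda f: source_priority.index(f["source"]))
    for feed in ranked:
        for field, value in feed.items():
            if field != "source" and field not in golden and value is not None:
                golden[field] = value
    return golden

# Two hypothetical vendor records for the same cash dividend event:
feeds = [
    {"source": "vendor_b", "event": "DVCA", "ex_date": "2024-05-02", "rate": None},
    {"source": "vendor_a", "event": "DVCA", "ex_date": "2024-05-01", "rate": 0.45},
]
priority = ["vendor_a", "vendor_b"]
print(build_golden_copy(feeds, priority))
```

Note how the conflicting `ex_date` is resolved in favor of the higher-priority source, while the missing `rate` in one feed simply falls through to whichever source has it.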

The information sent to investors includes not only golden-copy event data but also data on their holdings and entitlements arising from the corporate action. This introduces interpretation risk: stakeholders rely not only on custodian feeds but also on local feeds, which often present data, such as tax details, that cannot be standardized in global formats. Failure to interpret corporate action information correctly may lead to suboptimal trading decisions by brokerage and fund management firms, whether for clients or for proprietary positions.

The first and foremost challenge in corporate actions processing, as explained above, lies in capturing the event announcement and creating the golden copy. However, that is only the first step in a corporate action’s lifecycle. More complex events, including voluntary events such as tender offers, mergers, rights offers, and exchange offers, require many instructions/elections to be delivered for the event. This upward chain of communication is very complex: elections are delivered in non-standard formats via email and phone, which introduces considerable risk. The more intermediaries in the chain, the tighter the deadline to respond, as each intermediary sets its own deadline to process the election. The effect of corporate actions on share prices and trading activity is generally seen around key dates, e.g., announcement date, ex-date, and record date. Hence an investor’s decision can change several times, and the securities firm may receive multiple elections on the same position. The other critical factor in election processing is the investor’s current holdings, which must be up to date at the time of election; processing an election against wrong holdings can have adverse effects. Wrong holdings can result from trading or lending activities that have not yet been updated in the books. Frequent reconciliation of holdings is a significant step in reducing this risk.

Capturing the data, creating the golden copy, distributing the data to intermediaries and investors, and processing instructions for complex events all present plenty of challenges; however, the final frontier is still to be conquered, where corporate action payments must be made and the accounting completed.

Mandatory corporate actions, such as dividend and interest payments, are straightforward in that they only require a transfer of money from the bank account of the issuer to the bank accounts of the intermediaries and then to the investor. For income from cross-border security holdings, the payment may operate less smoothly, and a delay may occur between the payment date and the time at which the cash reaches the beneficiary’s account.

For complex events that involve the distribution of shares, the process becomes more complicated once fractions come into the picture. Sometimes these fractions are paid as cash in lieu; other times they must be ignored. Adding or ignoring these fractions at the intermediary level can ultimately lead to a different consolidated entitlement at its agent level. For example, suppose an intermediary’s consolidated holding is 300 shares, held by three investors with 100 shares each. With a distribution ratio of 1:3 (one new share for every three shares held), the intermediary’s consolidated position entitles it to 100 shares ((100+100+100)/3), yet each individual investor is entitled to only 33.33 shares. The handling of fractions in such a case can have quite different implications:

  1. Providing cash in lieu ⇒ The intermediary itself receives no extra cash because its consolidated holding divides evenly, so it has to sell the surplus share and distribute the cash to each investor.
  2. Rounding down/up ⇒ The intermediary ends up with an extra share or a missing share, depending on the holdings.
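The 1:3 worked example above can be checked in a few lines. The share price used to value the cash in lieu is an assumed figure, and the policies shown are simplified versions of the two options listed.

```python
def entitlement(holding, numerator=1, denominator=3):
    """New shares due under a 1-for-3 distribution (illustrative ratio)."""
    return holding * numerator / denominator

investors = [100, 100, 100]

# Intermediary level: consolidated 300 shares -> exactly 100 new shares.
consolidated = entitlement(sum(investors))          # 100.0

# Investor level: each 100-share holder is due 33.33... shares.
per_investor = [entitlement(h) for h in investors]

# Round-down policy: each investor gets 33 whole shares, so only 99 are
# allocated and 1 share is left over at the intermediary level.
rounded = [int(e) for e in per_investor]
residual = consolidated - sum(rounded)              # 1.0 share

# Cash-in-lieu policy: sell the residual share at an assumed price and
# split the proceeds in proportion to each investor's fraction.
price = 42.00  # assumed market price
fractions = [e - r for e, r in zip(per_investor, rounded)]
cash = [price * f for f in fractions]
print(rounded, residual, [round(c, 2) for c in cash])
```

This makes the mismatch concrete: the intermediary's consolidated entitlement (100 shares) differs from the sum of the investors' rounded entitlements (99 shares), and the chosen fraction policy decides who bears or receives the difference.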

Another important aspect of corporate action payments and entitlements is taxation. An intermediary normally depends on local sources for tax information. The globalization of the financial industry has produced an exponential rise in cross-border trading, which means more investors are affected by a corporate action on any given security. Taxation for an investor depends on residency status and thus has a direct impact on the entitlements and payments of corporate actions.

Taxation on corporate actions is normally seen as a value-added service, and not all custodians act as tax agents for their investors. Taxation on corporate actions brings a lot of complexity in terms of:

  1. Types of taxation, e.g., withholding tax.
  2. The part of the entitlement on which tax needs to be paid. In some cases the investor does not pay tax on the full entitlement, e.g., church tax in Germany or unfranked dividends in Australia.
  3. Residency of investors. Local investors are sometimes exempt from taxes while foreign investors are not.
  4. Tax credits, where a part of the tax is given back to the investor.
  5. Double taxation treaties, under which the investor makes reclaims as provided by the treaty between the two countries.
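As a rough sketch of withholding and treaty reclaims (points 1 and 5 above), the calculation below applies a statutory withholding rate and computes the reclaimable amount where a treaty caps the rate lower. The rates are assumed purely for illustration and do not reflect any particular jurisdiction.

```python
def dividend_tax(gross, statutory_rate, treaty_rate=None):
    """Withholding on a cash dividend (illustrative rates only).
    If a double-taxation treaty caps the rate below the statutory one,
    the difference is reclaimable by the investor."""
    withheld = gross * statutory_rate
    reclaim = 0.0
    if treaty_rate is not None and treaty_rate < statutory_rate:
        reclaim = gross * (statutory_rate - treaty_rate)
    net = gross - withheld
    return {"withheld": withheld, "net": net, "reclaim": reclaim}

# Foreign investor on a 1,000.00 gross dividend: 26.375% statutory
# withholding with an assumed 15% treaty cap.
print(dividend_tax(1000.0, 0.26375, treaty_rate=0.15))
```

Real tax processing additionally depends on the investor's documentation status, exemptions on parts of the entitlement, and tax credits, all of which sit outside this simplified sketch.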

Beyond calculating the tax, notifying investors of these tax details in a standard form remains an unexplored frontier for organizations. Each intermediary collates this information in its own way and then sends it to investors, who in turn have their own methods of interpreting these messages.

By automating the various corporate actions functions, organizations can ensure long-term operational efficiency and effectiveness.

Corporate Actions and Client Servicing:

Every financial organization is looking for new ways to attract clients by providing personalized services, and these now extend to the range of corporate actions the organization processes. Timely, high-quality corporate actions information in the front office enables better-informed trading and investment analysis and decision-making; it helps support global investment strategies, reduces interpretation errors, and benefits the monitoring of accounts and positions.

Finally, in a world where FinTech and automation are at the helm of every organization, we may witness a significant change in the way corporate actions are processed in the near future.

Trading System And Algorithmic Trading Strategies


What is a Trading System?

A “trading system” applies a set of trading strategies to given input data to generate entry and exit (buy/sell) signals on a trading platform. The traders and professionals who create trading strategies to maximize profit are called “quants”. They use exhaustive quantitative research and analysis to build efficient strategies, applying advanced statistical and mathematical models.

Algorithmic trading – Algorithmic trading uses algorithms to turn trading ideas into trading strategies. The algorithms are backtested on historical data and then run against real market data to maximize returns. Execution can be manual or automated.

Quantitative trading – Advanced mathematical and statistical models are used in the creation and execution of quantitative trading strategies.

Automated trading – Automated trading involves automated order generation, submission, and execution. However, such systems are rarely fully automated; manual intervention is still required.

HFT (high-frequency) trading – Trading strategies can be classified into low-, medium-, and high-frequency strategies according to the holding time of the trades. A high-frequency trading strategy holds a position for a fraction of a second and executes automatically; millions of trades can be executed per day in this model.

Most algorithmic trading is high-frequency trading (HFT), which attempts to capitalize on placing a large number of orders at very fast speeds across multiple markets and multiple decision parameters, based on pre-programmed instructions.

Algo trading is also known as black-box trading.

Profit opportunities are greater in algo trading; it also makes markets more liquid and trading more systematic by ruling out the emotional and psychological influences of human traders.

Algorithmic Trading Strategies

  • Momentum/Trend Following:
    Calculate the 50-day SMA (Simple Moving Average)
    Calculate the 200-day SMA
    Take a long position when the 50-day SMA is greater than or equal to the 200-day SMA
    Take a short position when the 50-day SMA is smaller than the 200-day SMA
    This is one of the most common algorithmic trading strategies. It follows trends in moving averages, channel breakouts, price-level movements, and related technical indicators. The algo trader assumes there is a trend in the market and uses statistics to determine whether the trend will continue. It does not involve making predictions or price forecasts; trades are initiated based on the occurrence of desirable trends. The 50-day/200-day moving average crossover above is a popular trend-following strategy.
  • Arbitrage Opportunities:
    Buying a dual listed stock at a lower price in one market and simultaneously selling it at a higher price in another market offers the price differential as risk-free profit or arbitrage. The same operation can be replicated for stocks versus futures instruments, as price differentials do exists from time to time. Implementing an algorithm to identify such price differentials and placing the orders allows profitable opportunities in an efficient manner. Also, trading can be triggered by the acquisition of the issuer company. This is called corporate event. Such event driven strategy is applied when the trader is planning to invest based on the pricing inefficiencies that may happen during a corporate event (before or after). Bankruptcy, acquisition, merger, spin-offs etc could be the event that drives such kind of an investment strategy. These strategies can be market neutral and used by hedge fund and proprietary traders widely. Index Fund Rebalancing: Index fund has defined periods of rebalancing to bring their holdings to par with their respective benchmark indices. This creates profitable opportunities for algorithmic traders, who capitalize on expected trades that offer 20-80 basis points profits depending upon the number of stocks in the index fund, just prior to index fund rebalancing. Such trades are initiated via algorithmic trading systems for timely execution and best prices.
  • Machine Learning Based:
    The major aspect of ML is learning from past data to predict the outcome of an unseen or new situation. Humans learn in the same fashion, but a machine can process a huge volume of data much faster and predict the outcome. This is how such trading systems work: traders handle large volumes of historical data, analyze them, and predict stock prices to establish various trading strategies. Hence machine learning has become one of the key elements of an algo trading system. There are many types of ML techniques depending on the nature of the target prediction: regression, classification, clustering, and association. Another categorization is supervised (the target is known to the model) versus unsupervised (the target is unknown to the model) techniques. Python is a powerful language that supports statistical computation and works easily with ML algorithms; R is another powerful language for statistical analysis.
  • Mathematical Model-Based Strategies (source: Investopedia):
    Proven mathematical models, like the delta-neutral trading strategy, allow trading on a combination of options and the underlying security, where trades are placed to offset positive and negative deltas so that the portfolio delta is maintained at zero.
    • Trading Range (Mean Reversion):
      The mean reversion strategy is based on the idea that the high and low prices of an asset are a temporary phenomenon that reverts to a mean value periodically. Identifying and defining a price range and implementing an algorithm based on it allows trades to be placed automatically when the price of an asset breaks in and out of its defined range.
    • Volume-Weighted Average Price (VWAP):
      The volume-weighted average price strategy breaks up a large order and releases dynamically determined smaller chunks of the order to the market using stock-specific historical volume profiles. The aim is to execute the order close to the volume-weighted average price (VWAP), thereby benefiting from the average price.
    • Time Weighted Average Price (TWAP):
      The time-weighted average price strategy breaks up a large order and releases dynamically determined smaller chunks of the order to the market using evenly divided time slots between a start and end time. The aim is to execute the order close to the average price between the start and end times, thereby minimizing market impact.
    • Percentage of Volume (POV):
      Until the trade order is fully filled, this algorithm continues sending partial orders, according to the defined participation ratio and according to the volume traded in the markets. The related “steps strategy” sends orders at a user-defined percentage of market volumes and increases or decreases this participation rate when the stock price reaches user-defined levels.
    • Implementation Shortfall:
      The implementation shortfall strategy aims at minimizing the execution cost of an order by trading off the real-time market, thereby saving on the cost of the order and benefiting from the opportunity cost of delayed execution. The strategy will increase the targeted participation rate when the stock price moves favorably and decrease it when the stock price moves adversely.
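The 50/200-day SMA crossover rule described under Momentum/Trend Following above can be sketched in plain Python. Shorter windows are used in the toy example so the signal flips are visible on a handful of prices; a real system would compute the SMAs over a long price history and use a proper backtesting framework.

```python
def sma(prices, window):
    """Simple moving average; None until enough history exists."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, fast=50, slow=200):
    """'long' when the fast SMA >= slow SMA, 'short' otherwise."""
    fast_sma, slow_sma = sma(prices, fast), sma(prices, slow)
    signals = []
    for f, s in zip(fast_sma, slow_sma):
        if f is None or s is None:
            signals.append(None)  # not enough history yet
        else:
            signals.append("long" if f >= s else "short")
    return signals

# Toy example with 2/4-day windows so the crossover is visible:
prices = [10, 11, 12, 13, 12, 11, 10, 9]
print(crossover_signals(prices, fast=2, slow=4))
```

The signal flips from long to short as the downtrend pulls the fast average below the slow one, which is exactly the behavior the strategy description relies on.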

Benefits of Algorithmic Trading

  • Trades are executed promptly and instantly, capturing the benefit of the best possible price changes
  • Reduced risk of manual errors in placing trade orders, and higher performance
  • Reduced transaction costs
  • Takes advantage of multiple market conditions
  • The algorithm can be backtested on available historical and real-time data
  • Reduced possibility of human error driven by traders’ emotional and psychological factors

Algo-trading is used in many forms of trading and investment activities, including:

  • Mid- to long-term investors and buy-side firms (pension funds, mutual funds, insurance companies) that purchase stocks in large quantities but do not want to influence stock prices with discrete, large-volume investments.
  • Short term traders and sell side participants (market makers, speculators, and arbitrageurs) benefit from automated trade execution; in addition, algo-trading aids in creating sufficient liquidity for sellers in the market.
  • Systematic traders (trend followers, pairs traders, hedge funds, etc.) find it much more efficient to program their trading rules and let the program trade automatically.
  • Algorithmic trading provides a more systematic approach to active trading than methods based on a human trader’s intuition or instinct.