Machine learning is one of those technologies that is constantly around us, even if we do not realize it. For instance, machine learning is used to solve problems such as deciding whether an email we received is spam or genuine, how cars can drive themselves, and what product someone is likely to purchase. Every day we see these kinds of machine learning solutions in action. It is machine learning at work when an incoming email is automatically scanned and moved to the spam folder. For the past few years, Google, Tesla, and others have been building self-driving systems that may soon augment or replace the human driver. And data giants like Google and Amazon use your search history to predict what you are looking to buy and make sure you see ads for those items on every webpage you visit. All of this useful, and sometimes annoying, behavior is the result of artificial intelligence.

This brings up the key characteristic of machine learning: the system learns how to solve the problem from example data, rather than from specific logic that we write ourselves. This is a notable departure from how most programming is done. In more traditional programming, we carefully analyze the problem and write the code ourselves.

That code reads in data and uses its predefined logic to choose the right branches to execute, which then produce the correct result.

Machine Learning and Conventional Programming

With conventional programming, we use constructs such as if statements, switch-case statements, and control loops implemented with while, for, and do statements. Each of these statements has conditions that must be defined, and the dynamic data typical of machine learning problems can make defining those conditions very difficult. With machine learning, by contrast, we do not write the logic that produces the results. Instead, we gather the data we need and reshape it into a format the machine learning algorithm can use. We then pass this data to the algorithm, which analyzes it and creates a model that implements the solution to the problem based on that data.
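To make the contrast concrete, here is a minimal, hypothetical Python sketch (it assumes scikit-learn and an invented two-feature spam example): the rule-based function encodes its test by hand, while the model derives an equivalent decision boundary from labelled examples.

```python
# Hypothetical illustration only: a hand-written rule versus a model learned from data.
# Assumes scikit-learn; the features, labels, and threshold are invented.
from sklearn.linear_model import LogisticRegression

def is_spam_rule_based(num_links: int, has_suspicious_words: int) -> bool:
    # Conventional programming: the test is written and maintained by hand.
    return num_links > 5 and has_suspicious_words == 1

# Machine learning: the decision logic is inferred from labelled examples.
X = [[1, 0], [7, 1], [0, 0], [9, 1], [2, 1], [8, 1]]  # [num_links, has_suspicious_words]
y = [0, 1, 0, 1, 0, 1]                                # 1 = spam, 0 = genuine

model = LogisticRegression().fit(X, y)
print(is_spam_rule_based(6, 1))        # decision made by our explicit rule
print(model.predict([[6, 1]]))         # decision made by the learned model
```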

Machine Learning: High-Level View

We start with a large amount of data that contains patterns. That data is fed into a machine learning algorithm, which searches for one or more patterns. The outcome of this process is a predictive model: essentially, business logic that can identify the learned patterns in new data. An application then supplies new data to the model to see whether it recognizes the known pattern. In the example used here, the new data could be additional transactions, and identifying probable patterns means the model should predict whether those transactions are fraudulent.
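The following is a minimal sketch of that train-then-predict flow, assuming scikit-learn and a purely invented transaction dataset; a real fraud model would use far richer features and much more data.

```python
# Minimal train-then-predict sketch; features, labels, and values are invented.
from sklearn.ensemble import RandomForestClassifier

# Historical transactions: [amount, hour_of_day, is_foreign]; label 1 = fraudulent.
X_train = [[25, 14, 0], [9000, 3, 1], [40, 10, 0], [7500, 2, 1], [60, 18, 0], [8800, 4, 1]]
y_train = [0, 1, 0, 1, 0, 1]

# The algorithm analyses the data and produces the model (the "business logic").
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# The application then supplies new transactions and acts on the predictions.
new_transactions = [[30, 12, 0], [9200, 1, 1]]
print(model.predict(new_transactions))        # e.g. [0 1] -> the second one looks fraudulent
print(model.predict_proba(new_transactions))  # probabilities behind each prediction
```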

Machine Learning and FinTech
FinTech is one of the industries that could be hugely impacted by machine learning, and it can leverage machine learning technologies to obtain better predictions and risk analysis in finance applications. The following are five areas where machine learning could impact finance applications, making financial technology smarter at tasks such as fraud detection, algorithmic trading, and portfolio management.

Risk Management
Applying a predictive-analysis model to large volumes of real-time data gives a machine learning algorithm command over numerous data points. The traditional approach to risk management relied on analyzing structured data against predefined rules and was therefore constrained to structured data alone, yet more than 90% of data is unstructured. Deep learning can process unstructured data and does not depend solely on static information from loan applications or other financial reports. Predictive analysis can even anticipate how a loan applicant's financial status may be affected by current market trends.

Internet Banking Fraud
Another example is detecting internet banking fraud. If fraud keeps occurring in fund transfers made via internet banking and we have the complete data, we can find the pattern involved and identify the loopholes or hack-prone areas of the application. It is all about recognizing patterns and predicting outcomes based on them. Machine learning plays an important role in data mining, image processing, and language processing. It cannot always provide a correct analysis or a perfectly accurate result, but it produces a predictive model based on historical data that supports decision making. The more data available, the more reliable the resulting predictions.
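One plausible way to surface such patterns, sketched below under clear assumptions (scikit-learn's IsolationForest and an entirely synthetic set of fund-transfer records), is unsupervised anomaly detection: transfers that deviate strongly from the normal pattern are flagged for review.

```python
# Hedged sketch: unsupervised anomaly detection over synthetic fund-transfer data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row: [transfer_amount, transfers_per_day]; values are invented.
normal = rng.normal(loc=[200.0, 1.0], scale=[50.0, 0.5], size=(500, 2))
suspicious = np.array([[5000.0, 40.0], [4200.0, 35.0]])   # bursts of large transfers
transfers = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transfers)
flags = detector.predict(transfers)           # -1 marks likely anomalies
print(transfers[flags == -1])                 # candidates for a fraud review queue
```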

Sentiment Analysis
One of the areas where machine learning can play an important role is sentiment or news analysis. Future applications can no longer depend only on data coming from trades and stock prices; traditionally, financial intuition relied on trade and stock data to discover new trends. Machine learning can be extended to understand social media trends and other news and information sources in order to perform sentiment or news analysis. Algorithms can computationally identify and categorize the opinions expressed by users and feed them into predictive analysis. The more data, the more accurate the predictions.
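A toy example of the idea, assuming scikit-learn and a handful of invented headlines, is shown below; production sentiment analysis would rely on much larger corpora and more sophisticated language models.

```python
# Illustrative only: a tiny sentiment classifier over invented news headlines.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "Company beats earnings expectations",
    "Regulator fines bank over compliance failures",
    "Strong quarterly growth lifts shares",
    "Profit warning issued after weak sales",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(headlines, labels)
print(clf.predict(["Shares rally on strong growth outlook"]))  # likely [1], given the positive vocabulary
```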

Robo-Advisors
Robo-advisors are digital platforms that calibrate a financial portfolio and provide planning services with minimal human intervention. Users supply details such as their age, current income, and financial status, and expect the robo-advisor to recommend investments, based on current and projected market trends, that will meet their retirement goals. The advisor processes this request by spreading the investments across financial instruments and asset classes to match the user's goals. The system responds in real time to changes in the user's goals and in market trends, and performs predictive analysis to find the best match for the user's investments. Robo-advisors may in future largely displace the human advisors who earn fees for these services.
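As a toy sketch of that allocation logic: the "110 minus age" equity rule below is a common textbook illustration, not Magic FinServ's methodology or investment advice, and the figures are invented.

```python
# Toy robo-advisor sketch: derive a target allocation and the rebalancing trades.
def target_allocation(age: int, risk_tolerance: float) -> dict:
    """Split a portfolio between equities and bonds from age and risk appetite."""
    equity = min(max((110 - age) / 100 * risk_tolerance, 0.0), 1.0)
    return {"equities": round(equity, 2), "bonds": round(1 - equity, 2)}

def rebalance(holdings: dict, target: dict, portfolio_value: float) -> dict:
    """Trade (in currency units) needed per asset class to reach the target."""
    return {asset: round(target[asset] * portfolio_value - holdings.get(asset, 0.0), 2)
            for asset in target}

target = target_allocation(age=35, risk_tolerance=0.8)
print(target)                                                    # {'equities': 0.6, 'bonds': 0.4}
print(rebalance({"equities": 55000.0, "bonds": 45000.0}, target, 100000.0))
```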

Security
The highest concern for banks and other financial institutions is the security of users and their details, which, if leaked, expose them to hacking and ultimately to financial losses. The traditional approach is to give the user a username and password for secure access, with security questions or mobile-number validation for password recovery or account restoration. Using AI, one could instead develop an anomaly-detection application that relies on biometric data such as facial recognition, voice recognition, or retina scans. This becomes possible by applying predictive analysis, with repeatedly refined models, over large volumes of biometric data to make more accurate predictions.

How Can Magic FinServ help?

Magic FinServ is working intensively on visual analytics and artificial intelligence, leveraging machine learning to solve business problems such as financial analysis, portfolio management, and risk management. As a financial services provider, Magic FinServ can foresee the impact of machine learning and predictive analysis on financial services and financial technology. The technology business unit of Magic uses technologies such as Python, Big Data, and Azure Cognitive Services to develop innovative solutions. Data scientists and technical architects at Magic work hand in hand to provide consulting and development services for financial technology with a forward-looking approach.

Evolution of RPA

IT outsourcing took off in the early '90s with broadening globalization, driven primarily by labor arbitrage. This was followed by the BPO outsourcing wave in the early 2000s.

The initial wave of outsourcing delivered over 35% cost savings on average, but it remained inefficient because of low productivity and the constant need for retraining driven by attrition.

As labor arbitrage became less lucrative with rising wage and operational costs, automation looked like a viable alternative for IT and BPO service providers to improve efficiency, although this automation was mostly incremental. At the same time, high-cost locations had to compete against their low-cost counterparts and realized that the only way to stay ahead in this race was to reduce human effort.

Robotic Process Automation (RPA) was born from the convergence of these two needs.

What is RPA?

RPA is software that automates high volumes of repetitive manual tasks, increasing operational efficiency and productivity while reducing cost. RPA enables businesses to configure their own software robots (RPA bots) that can work 24x7 with higher precision and accuracy.

The first generation of RPA started with Programmable RPA solutions, called Doers.

Programmable RPA tools are programmed to work with various systems via screen scraping and integration. They take input from other systems and make decisions that drive actions. The most repetitive processes are the ones automated by programmable RPA.

However, programmable RPA works only with structured data and legacy systems. It is highly rule-based, with no learning capability.

Cognitive automation is an emerging field that addresses the limitations of the first-generation RPA systems. It is also called "Decision-maker" or "Intelligent Automation".

A diagram published by the Everest Group illustrates the power of AI/ML in a traditional RPA framework.

Cognitive automation combines RPA tools with artificial intelligence (AI) capabilities such as optical character recognition (OCR) and natural language processing (NLP) to provide end-to-end automation. It deals with both structured and unstructured data, including text-heavy reports. It is probabilistic, but it can learn system behavior over time and converge on deterministic solutions.
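As a rough illustration of how OCR output might feed a downstream rule-based bot, here is a hypothetical Python sketch; it assumes the pytesseract library (with a local Tesseract install), an invented scanned invoice image, and invented field patterns rather than any specific product's behaviour.

```python
# Hypothetical OCR + extraction step feeding an RPA workflow; file name and
# field patterns are assumptions, not a specific vendor's implementation.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("invoice_scan.png"))   # OCR step

# Simple extraction step: pull the fields the downstream bot needs.
invoice_no = re.search(r"Invoice\s*(?:No\.?|#)\s*(\w+)", text, re.IGNORECASE)
total = re.search(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)

record = {
    "invoice_no": invoice_no.group(1) if invoice_no else None,
    "total": total.group(1) if total else None,
}
print(record)   # structured output handed to the rule-based part of the workflow
```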

There is another type of RPA solution – “Self-learning solutions” called “Learners”.

Programmable RPA solutions need significant programming effort and technique to enable interaction with other systems; self-learning solutions, by contrast, program themselves.

There are various learning methods adopted by RPA tools:

  • Some tools use historical data (when available) together with current data: they monitor employee activity over time to understand the tasks and start completing them once they have gained enough confidence in the process.
  • Other tools learn tasks as they are performed manually: they learn the activities that make up each task and start automating them, and their capabilities grow as feedback from the operations team raises their level of automation.
  • Increasing business complexity is driving the move from rule-based processing to a data-driven strategy; cognitive solutions help the business manage both known and unknown areas, take complex decisions, and identify risk.

As per HfS Research, the RPA software and services market is expected to grow to $1.2 billion by 2021, at a compound annual growth rate of 36%.

Chatbots, human agents, agent-assist tools, RPA robots, cognitive robots: RPA combined with ML and AI creates a smart digital workforce and unleashes the power of digital transformation.

The focus has shifted from efficiency to intelligence in business process operations.

Cognitive solutions are the future of automation, and data is the key driving factor in this journey.

We at MagicFinServ have developed several solutions to help our clients get more out of structured and unstructured data. Our endeavor is to use a modern technology stack and frameworks, including Blockchain and Machine Learning, to deliver greater value from structured and unstructured data to Enterprise Data Management firms, FinTechs, and large buy-side and sell-side corporations.

Understanding of data and domain is crucial in this process. MagicFinServ has built a strong domain-centric team that understands the complex data of the Capital Markets industry.

The innovative cognitive ecosystem of MagicFinServ is solving real-world problems.

Want to talk about our solution? Please contact us at https://www.magicfinserv.com/.

Before we get into the details of how digitalization has contributed to unstructured data, we need to understand what is meant by the terms digitalization and unstructured data.

Digitization: the process of converting information into a digital format, in which information is organized into discrete units of data (called bits) that can be separately addressed (usually in multiple-bit groups called bytes).

Unstructured Data: information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy but may also contain data such as dates, numbers, and facts. This results in irregularities and ambiguities that make it difficult to understand using traditional programs, compared with data stored in fielded form in databases or annotated (semantically tagged) in documents. (Source: https://en.wikipedia.org/wiki/Unstructured_data)

To connect the two, consider that every day there is a new evolution in the technology space, and the desire to digitalize everything around us is gaining momentum alongside it.

However, we rarely stop to ask whether this process will actually solve our problems, or whether it will lead to a bigger one that cuts across all current and future verticals.

If we think this through, we realize that instead of creating a solution for the digital world or the digitized economy, we have actually paved the way for data to become unstructured, or at best semi- or quasi-structured, and this pile of unstructured data is growing every day.

The obvious question is which factors are contributing to this pile of unstructured data. Some of them are mentioned below:

  1. The rapid growth of the Internet, leading to a data explosion and massive information generation.
  2. Data that has been digitized but given only some structure.
  3. The free availability of, and easy access to, various tools that help in the digitization of data.

The other crucial question about unstructured data is how we manage it.

Some insights and facts that underline how serious the unstructured data problem is:

  • According to projections from Gartner, white-collar workers will spend anywhere from 30 to 40 percent of their time this year managing documents, up from 20 percent of their time in 1997.
  • Merrill Lynch estimates that more than 85 percent of all business information exists as unstructured data – commonly appearing in e-mails, memos, notes from call centers and support operations, news, user groups, chats, reports, letters, surveys, white papers, marketing material, research, presentations, and Web pages

(Source – http://soquelgroup.com/wp-content/uploads/2010/01/dmreview_0203_problem.pdf)

  • Nearly 80% of enterprises have very little visibility into what’s happening across their unstructured data, let alone how to manage it.

(Source – https://www.forbes.com/sites/forbestechcouncil/2017/06/05/the-big-unstructured-data-problem/2/#5d1cf31660e0)

Is there a solution to this?

To answer that question: data (information) in today's world is power, and unstructured data is tremendous power whose potential is still largely untapped. When it is tapped effectively and judiciously, it can turn fortunes for organizations.

Organizations and business houses that manage to extract meaning from this chaotic mess will be well positioned to gain a competitive edge over their peers.

Areas to focus on when addressing the unstructured data problem are:

  1. Raising awareness around it.
  2. Identifying where it resides in the organization.
  3. Ensuring the information is searchable.
  4. Making the content context- and search-friendly.
  5. Building intelligent content.

The good news is that we at Magic recognized the scale of this challenge some time back and have designed a set of offerings specifically to solve the unstructured and semi-structured data problem for the financial services industry.

Magic FinServ focuses on four primary data entities that financial services firms regularly deal with:

Market Information – Research reports, news, business and financial journals, and websites providing market information generate massive amounts of unstructured data. Magic FinServ provides products and services that tag metadata and extract valuable, accurate information to help our clients make timely, accurate, and informed decisions.

Trade – Trading generates structured data; however, there is huge potential to optimize operations and automate decisions. Magic FinServ has created tools, using Machine Learning and NLP, to automate several process areas, such as trade reconciliations, to help improve the quality of decision making and reduce effort. We estimate that almost 33% of the effort can be eliminated in almost every business process in this space.

Reference data – Reference data is structured and standardized, however, it tends to generate several exceptions that require proactive management. Organizations spend millions every year to run reference data operations. Magic FinServ uses Machine Learning tools to help the operations team reduce the effort in exception management, improve the quality of decision making and create a clean audit trail.

Client/Employee data – Organizations often do not realize how much client-sensitive data resides on desktops and laptops. Recent regulations such as GDPR now make it binding to address this risk. Most of this data is semi-structured and resides in Excel spreadsheets, Word documents, and PDFs. Magic FinServ offers products and services that help organizations quantify this risk and then take remedial action.

People often confuse the terms visual analytics and visual representation, taking both to mean the same thing: presenting a set of data in graphs that look good to the naked eye. Ask an analyst, however, and they will tell you that visual representation and visual analytics are two different arts.

Visual representation is used to present data that has already been analyzed. The representations simply show the output of the analysis and do little to drive the decision; the decision is already implied by the analytics that have been performed on the data.

Visual analytics, on the other hand, is an integrated approach that combines visualization, human factors, and data analysis. It allows humans to interact directly with the tool to produce insights and transform raw data into actionable knowledge that supports decision- and policy-making. Off-the-shelf tools can produce representations, but interactive visual analytics visualizations are typically custom made. Visual analytics capitalizes on the combined strengths of human and machine analysis (computer graphics, machine learning) to provide a tool where either alone would fall short.

The Process

Enormous amounts of data come with many quality issues, since the data is of different types and comes from various sources; in fact, the focus is now shifting from structured data toward semi-structured and unstructured data. Visual analytics combines the visual and cognitive intelligence of human analysts, such as pattern recognition and semantic interpretation, with machine intelligence, such as data transformation and rendering, to perform analytic tasks iteratively.

The first step involves integrating and cleansing this heterogeneous data. The second step involves extracting valuable data from the raw data. Next comes the most important part: developing a user interface, grounded in human knowledge, for carrying out the analysis, which uses artificial intelligence as a feedback loop and helps in reaching a conclusion and, eventually, a decision.
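As a rough, assumption-laden illustration of the first two steps, the pandas sketch below integrates two invented source files, flags quality issues, and extracts a summary that an interactive visual layer could then explore; the file and column names are hypothetical, not a real Magic FinServ pipeline.

```python
# Illustrative sketch of integrate/cleanse and extract, ahead of visual analysis.
import pandas as pd

trades = pd.read_csv("trades.csv", parse_dates=["trade_date"])
prices = pd.read_csv("vendor_prices.csv", parse_dates=["price_date"])

# Step 1: integrate and cleanse heterogeneous sources.
trades = trades.drop_duplicates(subset="trade_id")
prices = prices.rename(columns={"price_date": "trade_date"})
merged = trades.merge(prices, on=["isin", "trade_date"], how="left")
data_quality_issues = merged[merged["close_price"].isna()]    # rows needing review

# Step 2: extract the valuable subset that feeds the interactive visual layer.
summary = (merged.dropna(subset=["close_price"])
                 .groupby("isin")
                 .agg(volume=("quantity", "sum"), avg_price=("close_price", "mean")))
print(len(data_quality_issues), "rows with missing prices")
print(summary.head())
```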

If the methods used to reach a conclusion are not sound, the decisions emerging from the analysis will not be fruitful. Visual analytics takes a leap here by providing methods and user interfaces to examine those procedures through the feedback loop.

In general, the following paradigm is used to process the data:

Analyze First – Show the Important – Zoom, Filter and Analyze Further – Details on Demand (from:  Keim D. A, Mansmann F, Schneidewind J, Thomas J, Ziegler H: Visual analytics: Scope and challenges. Visual Data Mining: 2008, S. 82.)

Areas of Application

Visual Analytics could be used in many domains. The more prominent use could be seen in

  1. Financial Analysis
  2. Physics and Astronomy
  3. Environment and Climate Change
  4. Retail Industry
  5. Network Security
  6. Document analysis
  7. Molecular Biology

The greatest challenge of today's era is handling massive data collections from different sources. This data can run into thousands of terabytes, or even petabytes or exabytes. Most of it is semi-structured or unstructured, which makes it extremely difficult for a human alone, or a computer algorithm alone, to analyze.

For example, in the financial industry a large amount of mostly unstructured data is generated daily, and many qualitative and quantitative measures can be observed from it. Making sense of this data is complex because of the numerous sources and the amount of ever-changing incoming data. Automated text analysis can be coupled with human interaction and domain knowledge to analyze this enormous amount of data and reduce the noise within the datasets. Analyzing stock behavior based on news, and its relation to world events, is one of the prominent behavioral-science application areas. Tracking the buy-sell mechanism of stocks, including options trading, where temporal context plays an important role, can provide insight into future trends. By combining interaction with a visual mapping of automatically processed world events, the system can support the user in analyzing the ever-increasing text corpus.

Another example where visual analytics can be fruitful is monitoring the flow of information between the various systems used by financial firms. These products are domain-specific and perform specific tasks within the organization, but they require input data, which flows between different products (from the same or different vendors) through integration files. Sometimes these integration issues make it cumbersome for an organization to replace an old system with a new one. Visual analytics tools can show the current state of the flow and help detect the changes that would be required when replacing the old system with a new one. They can also help identify which systems would be most affected, based on the volume and type of data being integrated, reducing errors and minimizing administrative and development expenses.

Visual analytics tools and techniques create an interactive view of data that reveals the patterns within it, enabling analysts to draw conclusions. At Magic FinServ, we deliver the intelligence and insights hidden in the data and strengthen decision making. Magic's data services team can create more value for your organization by improving decision making with various innovative tools and approaches.

Magic also partners with top data solution vendors to ensure that your business gets the solution that fits your requirements; in this way we combine technical expertise with business domain expertise to deliver greater value to your business. Contact us today and our team will be happy to answer any queries.

Reference data is an important asset in a financial firm. Following the recent crisis in global markets, regulatory changes, and the explosion of derivative and structured products, the need for reliable market and reference data has become a central focus for financial institutions. Accurate data is the key element of any financial transaction, and faulty data is a major component of operational risk.

Reference data used in financial transactions can be classified as static and dynamic

  • Static Data: Data elements which have unalterable characteristics such as financial instrument data, indexes, legal entity/ counterparty, markets and exchanges.
  • Dynamic Data: Variable data such as closing and historical prices, corporate actions.

Reference data is stored and used across the front-office, middle-office, and back-office systems of financial institutions. Over a transaction's life cycle, reference data is used when interacting with various systems and applications, internally and externally. Problems related to faulty reference data continue to exist, leading to increased operational risk and cost.

To reduce data-related risks and issues and to contain cost, financial institutions are looking at innovative solutions to improve data management efficiency. Centralization, standardization, and automation of the data management process are key to achieving this goal.

Industry Challenges

  • Poor data quality; lack of global standards; presence of data silos; multiple data sources leading to inefficiency in the whole data governance process.
  • Data duplication and redundancy across various business functions.
  • Lack of data governing policies.
  • Lack of standardized data definition.
  • Time consuming data source onboarding process.
  • Inconsistent data leading to poor reporting and management.
  • High manual intervention in data capturing and validation process.

Poor data quality leads to increased operational risk, higher cost, and poor reporting and management.

Solution

  • Deploy centralized reference data management system and create data management framework.
  • Create golden copy of the reference data received from the various sources within an organization that can be accessed by all business functions.
  • Update the data daily or in real time at this single point.
  • Validate data in a single place before distributing it to the relevant business functions.
  • Resolve data exceptions centrally to avoid issues in downstream systems.

Benefits

  • Improved process efficiency through centralization of data management.
  • Reduced operational and data management costs.
  • More control over data quality and change management.
  • Reduced turnaround time for new business needs and new regulatory requirements.
  • Early detection and resolution of potential data issues.

Reference data is the data used to classify other data in an enterprise. It is used within every enterprise application, from back-end systems through to front-end applications, and is commonly stored in the form of code tables or lookup tables, such as country codes, state codes, and gender codes.

Reference data in the capital markets is the backbone of all financial institutions, banks, and investment management companies. It is stored and used in front-office, middle-office, and back-office systems. A financial transaction uses reference data when interacting with other associated systems and applications. Reference data is also used in price discovery for financial instruments.

Reference data is primarily classified into two types –

  • Static Data – Financial instruments and their attributes, specifications, identifiers (CUSIP, ISIN, SEDOL, RIC), exchange symbol, exchange or market traded on (MIC), regulatory conditions, tax jurisdiction, trade counterparties, and the various entities involved in a financial transaction.
  • Dynamic Data– Corporate actions and event-driven changes, closing prices, business calendar data, credit rating, etc.

Market Data

Market data is price and trade-related data for a financial instrument reported by the stock exchange. Market data allows traders and investors to know the latest price and see historical trends for instruments such as equities, fixed-income products, derivatives, and currencies.

Legal Entity data

The 2008 market crisis exposed severe gaps in measuring credit and market risk. Financial institutions face a hard challenge in identifying the complex corporate structure of a security's issuer and of the other counterparties and entities involved in their business. Institutions must be able to roll up, assess, and disclose their aggregate exposure to all entities across all asset classes and transactions. The legal entity is the key building block of this data; it helps a financial institution know all the parties it is dealing with and manage the associated risk.

Regulations such as the Foreign Account Tax Compliance Act (FATCA) and MiFID II require absolutely clear identification of all the entities associated with a security. The LEI plays a vital role in performing such due diligence.

EDM workflow

  • Data Acquisition – Data is acquired from leading data providers such as Bloomberg, Reuters, IDC, Standard & Poor's, etc.
  • Data Processing – Data normalization and transformation rules are applied, and validation processes clean the data.
  • Golden Copy creation – Cleaned and validated data is transformed into more trusted Golden Copy data through further processing (see the sketch after this list).
  • Data Maintenance – Manual intervention is applied where necessary to handle exceptions that cannot be resolved automatically.
  • Distribution/Publishing – Golden Copy data is published to consumer applications such as Asset Management, Portfolio Management, Wealth Management, Compliance, Risk & Regulatory applications, and other Business Intelligence platforms for analytics.
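A heavily simplified sketch of the golden copy step is shown below; it assumes pandas, invented vendor records, and a plain source-precedence rule, whereas real EDM platforms apply far richer attribute-level rules and validations.

```python
# Simplified golden-copy sketch: pick one value per attribute from several vendor
# feeds using a source-precedence rule, and flag conflicts as exceptions.
import pandas as pd

precedence = {"Bloomberg": 1, "Reuters": 2, "IDC": 3}       # lower rank = more trusted

feeds = pd.DataFrame([
    {"isin": "US0378331005", "source": "Reuters",   "coupon": 0.00, "maturity": None},
    {"isin": "US0378331005", "source": "Bloomberg", "coupon": 0.00, "maturity": None},
    {"isin": "XS1234567890", "source": "IDC",       "coupon": 4.25, "maturity": "2030-06-01"},
    {"isin": "XS1234567890", "source": "Reuters",   "coupon": 4.25, "maturity": "2030-06-15"},
])

feeds["rank"] = feeds["source"].map(precedence)
golden = (feeds.sort_values("rank")
               .groupby("isin", as_index=False)
               .first())        # first non-null value from the most trusted source
print(golden[["isin", "coupon", "maturity"]])

# Conflicting vendor values become exceptions routed to data maintenance.
conflicts = feeds.groupby("isin")["maturity"].nunique(dropna=True)
print(conflicts[conflicts > 1])
```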

Importance of efficient EDM system

The fast-changing regulatory and business requirements of the financial industry, poor data quality, and competition all demand a high-quality, centralized data management system across the firm.

In the current market situation, companies must be able to quickly process customer requests, execute trades, identify holdings and positions, assess and adjust risk levels, maximize operational efficiency and control, and optimize cost, all while meeting regulatory and compliance needs in a timely fashion.

An efficient EDM system enables the business to –

  • Establish a centralized database management system
  • Reduce manual work
  • Decrease operational risk
  • Lower data sourcing costs
  • Gain a better view of data
  • Meet governance and auditing needs
  • Obtain a better overview of risk management
  • Apply tailor-made user rights
  • Enable analytics and data-driven decisions

Challenges to overcome

  • Data quality & data accuracy.
  • Siloed data and disparate data across firms making it difficult to have a consolidated view of the risk exposure.
  • Data lineage.
  • Keeping the cost lower in such a fast-changing financial market.
  • Ability to quickly process customer requests, accurately price holdings, assess and adjust risk levels accordingly.
  • The complexity of the latest national and international regulations.

The corporate actions industry is making great strides toward automation. However, despite all the technology advancements, a significant portion of corporate actions data still requires manual processing, mainly because corporate actions are becoming more complex as cross-border trading gets easier and local market nuances multiply.

Another big reason the corporate actions industry has not achieved a higher degree of automation is that corporate actions are a back-office process, normally treated as a cost to be managed rather than a revenue generator, which discourages securities firms from investing heavily in it.

Corporate Actions processing could be divided into 3 parts:

  1. Capture of Corporate Action data
  2. Processing of Corporate Action data
  3. Dissemination of tailored Corporate Action data

Each of the three parts has its own challenges. Capturing the data is the first step, and it is here that the "golden copy" is created: a golden copy selects the best possible value from a variety of sources. Generating it is the first headache for a securities firm. Data from issuers is typically transmitted as press releases, prospectuses, and other free-text files such as PDF and HTML. The challenge for the securities firm lies in translating this unstructured data into information and transmitting it to the various stakeholders using industry standards. These stakeholders are the familiar financial industry participants: custodians, sub-custodians, brokers, prime brokers, and so on. Their primary aim is to capture data from various sources and produce a golden copy for investors. This golden copy is then disseminated to investors and intermediaries according to need; for example, an asset or investment manager may need the information as soon as possible to decide on an investment strategy, whereas a portfolio manager may need it at end of day to adjust the NAV.
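To make the capture step concrete, here is a deliberately simplified Python sketch that pulls key fields out of a free-text dividend announcement; the announcement text and the regular expressions are invented, and real systems typically combine NLP models, vendor feeds, and human review before anything becomes golden copy.

```python
# Hypothetical capture step: turn a free-text announcement into a structured record.
import re

announcement = (
    "XYZ Corp announces a cash dividend of USD 0.45 per share. "
    "Ex-date: 2024-05-10. Record date: 2024-05-13. Payment date: 2024-05-27."
)

patterns = {
    "rate": r"([A-Z]{3})\s*([\d.]+)\s*per share",
    "ex_date": r"Ex-date:\s*([\d-]+)",
    "record_date": r"Record date:\s*([\d-]+)",
    "pay_date": r"Payment date:\s*([\d-]+)",
}

record = {"event_type": "CASH_DIVIDEND" if "dividend" in announcement.lower() else "UNKNOWN"}
for field, pattern in patterns.items():
    match = re.search(pattern, announcement)
    record[field] = match.groups() if match else None

print(record)   # candidate golden-copy fields, still subject to validation
```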

The information sent to investors does not only include golden copy or event data; it also includes data on their holdings and entitlements arising from the corporate actions. This introduces interpretation risk, because stakeholders do not rely solely on custodian feeds but also on local feeds, which are often better at presenting data that cannot be standardized globally, such as tax data. Failure to interpret corporate action information correctly may lead to suboptimal trading decisions by brokerage and fund management firms, whether for clients or for proprietary positions.

The first and foremost challenge in corporate action processing, as explained above, lies in capturing the event announcement and creating the golden copy, but that is only the first step in the lifecycle of a corporate action. More complex events, including voluntary events such as tender offers, mergers, rights offers, and exchange offers, require many instructions or elections to be delivered. This upward chain of communication is very complex: elections are delivered in non-standard formats via email or phone, which introduces considerable risk. The more intermediaries in the chain, the tighter the deadline to respond, as each intermediary sets its own deadline for processing the election. The effect of corporate actions on share prices and trading activity is generally seen around key dates such as the announcement date, ex-date, and record date, so an investor's decision can change several times and the securities firm can receive multiple elections on the same positions. The other critical factor in election processing is the investor's current holdings, which must be up to date at the time of the election; processing an election against wrong holdings can have adverse effects. Wrong holdings can result from trading or lending activities that have not yet been updated in the books, so frequent reconciliation of holdings is a significant step toward reducing this risk.

Capturing the data, creating the golden copy, distributing the data to intermediaries and investors, and processing instructions for complex events all pose considerable challenges, yet the final frontier remains: making the corporate action payments and performing the accounting.

Mandatory corporate actions such as dividend and interest payments are straightforward, in that they only require a transfer of money from the issuer's bank account to the intermediaries' accounts and then to the investor. For income from cross-border security holdings, the payment may operate less smoothly, and a delay may occur between the payment date and the time at which the cash reaches the beneficiary's account.

For complex events that involve the processing of shares, the process becomes more intricate when fractions come into the picture. Sometimes these fractions are paid as cash in lieu; at other times they are simply ignored. Adding or ignoring fractions at the intermediary level can lead to a consolidated entitlement that differs from the one calculated at its agent's level. For example, suppose an intermediary's consolidated holding is 300 shares, held by three investors with 100 shares each. With a distribution ratio of 1:3 (one new share for every three held), the intermediary's consolidated position entitles it to 100 shares ((100+100+100)/3), but each individual investor is entitled to 33.33 shares. The handling of fractions in such a case can have quite different implications (see the sketch after the list below):

  1. Providing cash in lieu ⇒ The intermediary receives no cash for the fractions, because its consolidated holding has no fraction, so it has to sell the extra share and distribute the cash to each investor.
  2. Rounding down/up ⇒ The intermediary ends up with an extra share or a missing share, depending on the holdings.
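A small Python sketch of this 1-for-3 example, using exact fractions to avoid floating-point error, is shown below; the share price is an assumption and the rounding-down-with-cash treatment is only one of the possible conventions.

```python
# Sketch of the 1-for-3 example above: omnibus vs. per-investor entitlements.
from fractions import Fraction
import math

holdings = {"investor_a": 100, "investor_b": 100, "investor_c": 100}
ratio = Fraction(1, 3)                                   # one new share per three held

omnibus = int(sum(holdings.values()) * ratio)            # intermediary receives 100 shares
per_investor = {k: v * ratio for k, v in holdings.items()}   # 33 1/3 shares each

# Round each investor down; the leftover fractions never reach the intermediary as
# cash (its own consolidated position had no fraction), so it must sell the surplus
# share itself and pass the proceeds on.
distributed = {k: math.floor(v) for k, v in per_investor.items()}
surplus_shares = omnibus - sum(distributed.values())     # 1 share left to sell

price = 12.0                                             # assumed market price
cash_per_investor = round(surplus_shares * price / len(holdings), 2)

print(omnibus, distributed, surplus_shares, cash_per_investor)
```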

Another important aspect of corporate action payments and entitlements is taxation. An intermediary normally depends on local sources for tax information. Globalization of the financial industry has driven an exponential rise in cross-border trading, which means more investors are affected by a corporate action on any given security. An investor's taxation depends on residency status and therefore affects the entitlements and payments arising from corporate actions.

Taxation of corporate actions is normally seen as a value-added service, and not all custodians act as tax agents for their investors. Taxation of corporate actions brings a lot of complexity in terms of:

  1. The type of taxation, e.g. withholding tax.
  2. The part of the entitlement on which tax must be paid; in some cases the investor does not pay tax on the complete or full entitlement, e.g. church tax in Germany or unfranked dividends in Australia.
  3. The residency of investors: local investors are sometimes exempt from taxes, but foreign investors are not.
  4. Tax credits, where part of the tax is returned to the investor.
  5. Double taxation treaties, where the investor makes reclaims under the treaty between the two countries.

Apart from calculating the tax, notifying these tax details in a standard form is still an unexplored frontier for most organizations. Each intermediary collates this information in its own way and then sends it to investors, who in turn have their own methods of interpreting these messages.

By automating the various corporate actions functions, organizations can ensure long-term operational efficiency and effectiveness.

Corporate Actions and Client Servicing:

Every financial organization is looking for new ways to attract clients by providing personalized services, and these now include services built around the corporate actions the organization processes. Timely, high-quality corporate actions information in the front office enables better-informed trading and investment analysis and decision making; it helps support global investment strategies, reduces interpretation errors, and improves the monitoring of accounts and positions.

Finally, in a world where FinTech and automation are at the heart of every organization, we may witness a significant change in the way corporate actions are processed in the near future.

What is a Trading System?

A "trading system" creates a set of trading strategies that are applied to given input data to generate entry and exit (buy/sell) signals on a trading platform. The traders and professionals who create trading strategies to maximize profit are called "quants". They use exhaustive quantitative research and analysis, applying advanced statistical and mathematical models, to build efficient strategies.

Algorithmic trading – Algorithmic trading uses various algorithms to create a trading strategy from trading ideas. The algorithms are backtested with historical data and then used with real market data to give the best return. The execution can be done manually or automated.

Quantitative trading – Advanced mathematical and statistical models are used to create and execute trading strategies.

Automated trading – Automated trading involves automated order generation, submission, and execution. Such systems are rarely fully automated, however; manual intervention is still required.

HFT (high-frequency) trading – Trading strategies can be classified as low-, medium-, or high-frequency according to the holding time of the trades. A high-frequency trading strategy holds a position for a fraction of a second and executes automatically; millions of trades can be executed per day in this model.

Most algo-trading is high-frequency trading (HFT), which attempts to capitalize on placing a large number of orders at very high speed across multiple markets and multiple decision parameters, based on pre-programmed instructions.

Algo trading is also known as black-box trading.

Profit opportunities are greater in algo trading; it also makes markets more liquid and trading more systematic by removing the impact of human emotion from trading activity.

Algorithmic Trading Strategies

  • Momentum/Trend Following:
    Calculate the 50-day SMA (Simple Moving Average).
    Calculate the 200-day SMA.
    Take a long position when the 50-day SMA is greater than or equal to the 200-day SMA.
    Take a short position when the 50-day SMA is smaller than the 200-day SMA (see the sketch after this list).
    This is one of the most common algorithmic trading strategies. It follows trends in moving averages, channel breakouts, price-level movements, and related technical indicators. The algo trader assumes there is a trend in the market and uses statistics to determine whether the trend will continue. It does not involve making predictions or price forecasts; trades are initiated based on the occurrence of desirable trends. The 50- and 200-day moving-average crossover above is a popular trend-following strategy.
  • Arbitrage Opportunities:
    Buying a dual-listed stock at a lower price in one market and simultaneously selling it at a higher price in another market offers the price differential as risk-free profit, or arbitrage. The same operation can be replicated for stocks versus futures instruments, since price differentials do exist from time to time. Implementing an algorithm to identify such price differentials and place the orders allows profitable opportunities to be captured efficiently. Trading can also be triggered by the acquisition of the issuing company; this is called a corporate event. Such an event-driven strategy is applied when the trader plans to invest based on pricing inefficiencies that may occur around a corporate event (before or after), such as a bankruptcy, acquisition, merger, or spin-off. These strategies can be market neutral and are widely used by hedge funds and proprietary traders.
  • Index Fund Rebalancing:
    Index funds have defined rebalancing periods to bring their holdings back in line with their respective benchmark indices. This creates profitable opportunities for algorithmic traders, who capitalize on expected trades that offer 20-80 basis points of profit, depending on the number of stocks in the index fund, just prior to rebalancing. Such trades are initiated via algorithmic trading systems for timely execution and the best prices.
  • Machine Learning based:
    The major aspect of ML is learning from past data to predict the outcome of an unseen or new situation. Humans learn in the same fashion, but a machine can process a far larger volume of data much faster and predict the outcome; this is how such trading systems work. Traders handle a large volume of historical data, analyze it, and predict stock prices to establish various trading strategies, which is why machine learning has become one of the key elements of an algo-trading system. There are several types of ML technique depending on the nature of the target prediction: regression, classification, clustering, and association. Another categorization is supervised (the target is known to the model) versus unsupervised (the target is unknown to the model) techniques. Python is a powerful language that supports statistical computation and works easily with ML algorithms; R is another powerful language for statistical analysis.
  • Mathematical Model Based Strategies (source: Investopedia):
    Proven mathematical models, such as the delta-neutral trading strategy, allow trading on a combination of options and the underlying security, with trades placed to offset positive and negative deltas so that the portfolio delta is maintained at zero.
    • Trading Range (Mean Reversion):
      Mean reversion strategy is based on the idea that the high and low prices of an asset are a temporary phenomenon that reverts to its mean value periodically. Identifying and defining a price range and implementing an algorithm based on it allows trades to be placed automatically when the price of an asset breaks in and out of its defined range.
    • Volume-Weighted Average Price (VWAP):
      The volume weighted average price strategy breaks up a large order and releases dynamically determined smaller chunks of the order to the market using stock specific historical volume profiles. The aim is to execute the order close to the Volume Weighted Average Price (VWAP), thereby benefiting on average price.
    • Time Weighted Average Price (TWAP):
      Time-weighted average price strategy breaks up a large order and releases dynamically determined smaller chunks of the order to the market using evenly divided time slots between a start and end time. The aim is to execute the order close to the average price between the start and end times, thereby minimizing market impact.
    • Percentage of Volume (POV):
      Until the trade order is fully filled, this algorithm continues sending partial orders, according to the defined participation ratio and according to the volume traded in the markets. The related “steps strategy” sends orders at a user-defined percentage of market volumes and increases or decreases this participation rate when the stock price reaches user-defined levels.
    • Implementation Shortfall:
      The implementation shortfall strategy aims at minimizing the execution cost of an order by trading off the real-time market, thereby saving on the cost of the order and benefiting from the opportunity cost of delayed execution. The strategy will increase the targeted participation rate when the stock price moves favorably and decrease it when the stock price moves adversely.
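As a runnable sketch of the 50/200-day crossover described in the Momentum/Trend Following item above, the following Python code (assuming pandas, NumPy, and a synthetic price series rather than real market data) computes the two moving averages and the resulting long/short signal.

```python
# Illustrative 50/200-day SMA crossover on a synthetic price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 500)),
                   index=pd.bdate_range("2022-01-03", periods=500), name="close")

sma_50 = prices.rolling(50).mean()
sma_200 = prices.rolling(200).mean()

# +1 = long while the 50-day SMA is at or above the 200-day SMA, -1 = short otherwise.
signal = pd.Series(np.where(sma_50 >= sma_200, 1, -1), index=prices.index)
signal[sma_200.isna()] = 0                  # no position until both averages exist

trades = signal.diff().fillna(0)            # non-zero entries mark entries and exits
print(signal.value_counts())
print(trades[trades != 0].head())
```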

Benefits of Algorithmic Trading

  • Trades are executed promptly and instantly, capturing the best possible price changes
  • Reduced risk of manual errors in placing trade orders, and higher performance
  • Reduced transaction costs
  • Ability to take advantage of multiple market conditions
  • Ability to backtest the algorithm on available historical and real-time data
  • Reduced possibility of human error driven by traders' emotional and psychological factors

Algo-trading is used in many forms of trading and investment activities, including:

  • Mid- to long-term investors or buy-side firms (pension funds, mutual funds, insurance companies) who purchase stocks in large quantities but do not want to influence stock prices with discrete, large-volume investments.
  • Short term traders and sell side participants (market makers, speculators, and arbitrageurs) benefit from automated trade execution; in addition, algo-trading aids in creating sufficient liquidity for sellers in the market.
  • Systematic traders (trend followers, pairs traders, hedge funds, etc.) find it much more efficient to program their trading rules and let the program trade automatically.
  • Algorithmic trading provides a more systematic approach to active trading than methods based on a human trader’s intuition or instinct.
