Blockchain

Optimizing Performance with End-to-End Services

We optimize blockchain performance and enhance the user experience with end-to-end services, so that customers can focus on their core capabilities while scaling to the next level of growth at an optimal cost. We drive transformation by continually applying AI to enhance blockchain-based systems and applications, addressing issues related to compliance, regulation, security, privacy, and more.

Enterprise-level distributed/decentralized applications have become an integral part of any organization today and are designed and developed to be fault-tolerant to ensure availability and operability. However, despite the time and effort invested in creating a fault-tolerant application, no one can be 100% sure that the application will bounce back with the desired nimbleness in the event of failures. As the nature of failure can differ each time, developers have to design for all kinds of anticipated failures/scenarios. From a broader perspective, a failure can be any of the four types mentioned below:

  1. Failure Type1: Network Level Failures
  2. Failure Type2: Infrastructure (System or Hardware Level) Failures
  3. Failure Type3: Application Level Failure
  4. Failure Type4: Component Level Failures 

Resiliency Testing – Defining the 3-step process: 

Resiliency testing is critical for ensuring that applications perform as desired in real-life environments. Testing an application’s resiliency is also essential for ensuring quick recovery in the event of unforeseen challenges arising.      

Here, the developer's goal is to build a robust application that can rebound with agility from all probable failures. Due to the complex nature of such applications, unforeseen failures still keep coming up in production. Therefore, it has become paramount for testers to continually verify the developed logic and establish the system's resiliency against all such real-time failures.

Possible ways for testers to emulate real-time failures and check how resilient an application is against such failures

Resiliency testing is the methodology that helps to mimic/emulate the various kinds of failures defined earlier. Before defining a resiliency testing strategy for distributed and decentralized applications, the developer/tester determines a generic process for each failure identified earlier.

Based on our experience with multiple customer engagements for resiliency testing, the following 3-step process must be followed before defining a resiliency strategy.

  1. Step-1: Identify all components/services and any third-party libraries, tools, or utilities.
  2. Step-2: Identify the intended functionality of each component/service/library/tool/utility.
  3. Step-3: Map each item's upstream and downstream interfaces and the results expected from functional and integration behavior as per the acceptance criteria.

As per the defined process, the tester has to collect all functional/non-functional requirements and acceptance criteria for all four failure types mentioned earlier. Once all the information has been collected, it should be mapped against the 3-step process to lay down what is to be verified for each component/service. After mapping each failure using the 3-step process, we are ready to define a testing strategy and automate it to achieve accuracy while reducing execution time.

We elicited the four ways to set up distributed/decentralized networks for the testing environment in our previous blog. This blog explains the advantages/disadvantages of each approach to setting up applications in a test environment. It also describes why we prefer to first test such an application with containerized applications, followed by a cloud environment, over virtual machines and then a physical device-based setup.

To know more about our Blockchain Testing solutions, read here

Three modes of Resiliency testing 

Each mode needs to be executed with controlled and uncontrolled wait times. 

Mode1: Controlled execution for forcefully restarting components/services

Execution of component restarts can be sequenced with a defined expected outcome. Generally, we flow successful and failed transactions through the system and then verify that the transactions are reflected in the overall system behavior. If possible, we can also assert the individual component's/service's response to the flowed transaction, based on the intended functionality of the restarted component/service. This kind of execution can be done with either of the following (a minimal automation sketch follows the list):

  • A defined, fixed wait-time duration before restarting
  • A randomly selected wait-time interval
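
As an illustration of Mode1, here is a minimal sketch in Python. It assumes the components/services run as Docker containers and that the docker SDK for Python is installed; the container names, wait intervals, and the transaction assertions (left as a comment) are hypothetical placeholders, not part of any specific product.

```python
import random
import time

import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Hypothetical component/service containers, in the order we want to restart them.
COMPONENTS = ["node1-consensus", "node1-p2p", "node1-api"]


def restart_component(name: str, wait_seconds: float) -> None:
    """Stop a component, wait for the chosen interval, then start it again."""
    container = client.containers.get(name)
    container.stop()
    time.sleep(wait_seconds)
    container.start()


def controlled_restart(fixed_wait: float = 30.0, randomize_wait: bool = False) -> None:
    """Mode1: restart components in a defined sequence with fixed or random wait times."""
    for name in COMPONENTS:
        wait = random.uniform(5, 60) if randomize_wait else fixed_wait
        restart_component(name, wait)
        # Here, flow successful/failed transactions and assert the expected outcome
        # against the restarted component's intended functionality.


if __name__ == "__main__":
    controlled_restart(fixed_wait=30.0)      # defined, fixed wait-time duration
    controlled_restart(randomize_wait=True)  # randomly selected wait-time interval
```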

Mode2: Uncontrolled execution (randomization for choosing the component/service) for forcefully restarting components/services

Execution of a component restart can also be selected randomly, again with a defined expected outcome. As in Mode1, we flow successful and failed transactions, verify that they are reflected in the overall system behavior, and, where possible, assert the individual component's/service's response based on the intended functionality of the restarted component/service. This kind of execution can be done with:

  • A defined, fixed wait-time duration before restarting
  • A randomly selected wait-time interval

Mode3: Uncontrolled execution (randomization for choosing multiple components/services) for forcefully restarting components/services

Though this kind of test is the most realistic, it involves a lot of complexity depending on how the components/services are designed. If there are too many components/services, the number of test-scenario combinations increases exponentially. The tester should therefore use the system/application architecture to group components/services into the entities they represent within the system, and then execute Mode1 and Mode2 for those groups.

Types of Failures

Network Level Failures

As a distributed/decentralized application uses peer-to-peer networking to establish connections among the nodes, we need details of the specific component/service involved and how it can be restarted. We also need to know how to verify the behavior during downtime and while restarting it. Let's assume the system has one container within each node that is responsible for setting up communication with the other available nodes; then the following verifications can be performed –

  1. During downtime, other nodes are not able to communicate with the down node.
  2. No cascading effect of the down node occurs to the rest of the nodes within the network.
  3. After restart and initialization of restarted component/service, other nodes can establish communication with the down node, and the down node can also process the transaction.
  4. The down node can also interact with other nodes within the system and route the transaction as expected.
  5. Data consistency can be verified.
  6. The system's latency can also be captured before/after the restart to ensure that no performance degradation has been introduced to the system.

Infrastructure (System or Hardware Level) Failures

As the entire network runs on containers, we can emulate infrastructure failures using multiple strategies, such as:

  1. Taking a containerized application down or, if Docker is being used, taking the Docker daemon process down.
  2. Imposing resource limits for memory, CPUs, etc., so low at the container level that they are quickly exhausted under even a mild load on the system.
  3. Overloading the system with a high number of transactions of various data sizes.

For each failure described above, we can verify whether the system as a whole still meets all functional and non-functional requirements.
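
As a minimal sketch of the first two strategies, and assuming the nodes are Docker-hosted and the docker SDK for Python is available, the snippet below stops a node's container and imposes deliberately low resource limits on another; the container names and limit values are illustrative only.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()


def take_container_down(name: str) -> None:
    """Strategy 1: simulate an infrastructure outage by stopping a node's container."""
    client.containers.get(name).stop()


def starve_container(name: str) -> None:
    """Strategy 2: impose very low memory/CPU limits so a mild load exhausts them."""
    container = client.containers.get(name)
    # 64 MB of memory and roughly 10% of one CPU
    # (cpu_quota is relative to the default 100ms cpu_period).
    container.update(mem_limit="64m", memswap_limit="64m", cpu_quota=10000)


if __name__ == "__main__":
    take_container_down("node2-ledger")  # hypothetical container name
    starve_container("node3-ledger")     # hypothetical container name
```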

Application Level Failure

As a distributed application runs across many containers, we target stopping and starting only the specific container that holds the application logic. The critical aspect of restarting application containers is the timing of the stop and start relative to transaction processing. The three time-dependent stages for an application-container stop and start are:

  1. Stage1: Stop the container before sending a transaction.
  2. Stage2: Stop the container after sending a transaction with different time intervals, e.g., stopping the container immediately, after 1000 milliseconds, 10 seconds, etc.
  3. Stage3: Stop the container when a transaction is in a processing stage.

System behavior can be captured and asserted against functional and non-functional acceptance criteria for all the above three stages.
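
A minimal sketch of the Stage2 and Stage3 timing is shown below, assuming a Docker-hosted application container and a hypothetical HTTP endpoint for submitting transactions; the container name, endpoint, payload, and assertions are placeholders.

```python
import threading
import time

import docker    # Docker SDK for Python
import requests  # used for a hypothetical transaction-submission endpoint

client = docker.from_env()
APP_CONTAINER = "node1-app"                   # hypothetical application container
TX_ENDPOINT = "http://localhost:8080/api/tx"  # hypothetical endpoint


def send_transaction(payload: dict) -> None:
    requests.post(TX_ENDPOINT, json=payload, timeout=10)


def stop_app_after(delay_seconds: float) -> None:
    time.sleep(delay_seconds)
    client.containers.get(APP_CONTAINER).stop()


def run_stage(delay_seconds: float) -> None:
    """Stop the application container a given time after the transaction is sent."""
    stopper = threading.Thread(target=stop_app_after, args=(delay_seconds,))
    stopper.start()
    send_transaction({"amount": 100, "asset": "TEST"})
    stopper.join()
    client.containers.get(APP_CONTAINER).start()
    # Assert the transaction's final state against the acceptance criteria here.


if __name__ == "__main__":
    run_stage(0.0)   # stop immediately after sending
    run_stage(1.0)   # stop after 1000 milliseconds
    run_stage(10.0)  # stop after 10 seconds
```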

Component Level Failures

The tester should verify the remaining containers against all three modes and the three time-dependent stages. We can create any number of scenarios for these containers depending upon the following factors:

  1. The dependency of remaining containers on other critical containers.
  2. Intended functionality of the container and frequency of usage of those containers in most frequently used transactions.
  3. Stop and start for various time intervals (include all three stages to have more scenarios to target any fragile situation).
  4. The most fragile or unstable containers, or those with the most frequently reported errors.

By following the above-defined resiliency strategy, the tester should always reconcile against the application under test to check whether any areas are still left uncovered. If any component/service/third-party module, tool, or utility remains untouched, we can design scenarios by combining the following factors (a sketch of how such combinations can be enumerated follows the list below):

  1. Testing modes
  2. Time interval stages 
  3. Execution mode, e.g., sequential and randomization of restarts
  4. Grouping of containers for stopping and restarting
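
One simple way to enumerate these combinations is a Cartesian product of the factors. The sketch below is illustrative only; the factor values and container groups are hypothetical and would come from the actual system architecture.

```python
from itertools import product

# Illustrative values for each factor; real values come from the system architecture.
testing_modes = ["controlled", "uncontrolled-single", "uncontrolled-group"]
time_stages = ["before-tx", "after-tx-delay", "during-processing"]
execution_modes = ["sequential", "randomized"]
container_groups = [["node1-app"], ["node1-p2p", "node2-p2p"], ["consensus-a", "consensus-b"]]


def build_scenarios():
    """Enumerate resiliency scenarios by combining the four factors."""
    for mode, stage, execution, group in product(
        testing_modes, time_stages, execution_modes, container_groups
    ):
        yield {"mode": mode, "stage": stage, "execution": execution, "containers": group}


if __name__ == "__main__":
    scenarios = list(build_scenarios())
    print(f"{len(scenarios)} candidate scenarios generated")
```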

Based on our defined approach, followed by its implementation for multiple customers, we have prevented almost 60-70% of real-time issues related to resiliency. We also keep revising and upgrading our approach based on new experiences with new types of complicated distributed or decentralized applications and new failures, so that we can prevent real-time issues even more comprehensively. To explore resiliency testing for your decentralized applications, please write to us at mail@magicfinserv.com.

This current blog is part three in the series of blogs on DLT infrastructure testing. 

In the first blog, we covered all aspects of infrastructure testing for decentralized applications built on blockchain or distributed ledger platforms and the Magic FinServ approach. In the second blog, we addressed why customers must make infrastructure testing an integral part of the QA process.

In this third blog of the series, we address another issue of critical importance – automation. Automation is an essential requirement in any organization today when disruptive forces are sweeping across domains. And as a McKinsey report indicates – “Automation can transform testing and quality control because the increased capacity it provides allows a company to move from spot checks to 100 percent quality control, which reduces the error rate to nearly zero.” 

Infrastructure testing – A critical requirement

While the importance of infrastructure testing cannot be denied, four attributes make it extremely complicated from the tester's perspective. These are peer-to-peer networking (P2P), consensus algorithms, role-based nodes along with the permission for each node (only for private networks), and lastly, state and transactional data consistency under high load along with resiliency of nodes.

To know more about these in detail,  you can check the links provided below, which lead to the first and second in the series of blogs:       

Infrastructure Testing for Decentralized Applications built on Blockchain or Distributed Ledger Platform

Why is Infrastructure Testing important for Decentralized Applications built on any Blockchain or DLT

From these blogs, it becomes evident that though infrastructure testing is an essential requirement for any decentralized application, it is also a time-consuming task. Most of the supported features of such applications require different configurations/arrangements of nodes, meaning a different network topology for each feature. It is quite possible that a feature is tested with a certain number of nodes, yet properly testing a fix or enhancement requires a different number of nodes from what was designed earlier.

Developing a comprehensive test strategy

As far as test strategies are concerned, the most often deployed one utilizes Docker-based containers to replicate different network topologies with minimal changes. However, defining Docker-based containers (a.k.a. Docker services) in different numbers is also a highly time-consuming activity. Adding a single new container, depending upon the number of nodes, usually takes a couple of hours, so setting up Docker-based containers to create different network topologies is not only tedious but also complicated.

One also must take the cloud into account. Most organizations now require infrastructure testing to be carried out on cloud platforms to mimic, as closely as possible, the environment they would be using in real time. However, setting up one node on any existing cloud service could easily take two to three hours, even with automated ways to spin up machines. Therefore, to ensure quicker results, the option at hand is automation.

Automating the untested – how to get started

Today almost every organization/enterprise uses Agile methodology for product development and an automated way (with CI/CD) to create builds daily. Functional testing can be automated and integrated within CI/CD easily, but it is not so with non-functional testing like Infrastructure, Performance, Security, Resiliency, Load testing, etc. These are not easily integrated with CI/CD. Even if these are integrated within CI/CD, non-functional testing does not provide the kind of results organizations desire. 

Manual non-functional testing, on the other hand, is rather tedious. Since there are frequent builds that have to be tested (for non-functional areas like infrastructure), manually setting up a different network topology each time is not viable; it takes a lot of time and is highly error-prone. Non-functional testing of blockchain (other than infrastructure) operates at the node level rather than the network level; therefore, tests related to Performance, Security, and Resiliency (all of which come under non-functional testing) are performed on standard network topologies. This indicates that infrastructure testing relates directly to network topologies, whereas the other non-functional testing processes mentioned earlier are impacted only on a case-to-case basis.

For infrastructure testing, organizations must carry out the following activities to define the network topology:

  • Impact analysis of all changes related to the four significant factors listed earlier
  • If any of the four factors is impacted, then defining network topologies for each scenario
  • Set up of nodes for all probable network topologies
  • Creation of network for each network topology
  • Execution of functional/non-functional testing on each network topology to ensure that all network topologies are working as per the acceptance criteria

Impact analysis of changes 

To define the required number of network topologies, organizations must first identify what changes are to be made and whether those changes impact the peer-to-peer (P2P) networking logic, consensus algorithm logic, permissioning handler logic, or data/transaction consistency logic. If an impact is apparent, then organizations must define the corresponding network topology. This is the most time-consuming task of all, as one has to understand all the changes.

Another critical task for organizations is to perform impact analysis for all changes and find out whether the four major factors have been impacted or not. The easiest way to process this task would be to get developers to register this information with meaningful keywords that can automate impact analysis. With proper automation in place, organizations can use impact analysis to determine whether existing network topologies can be used or a new one has to be created.
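
A minimal sketch of such keyword-driven impact analysis is shown below, assuming developers register change descriptions somewhere machine-readable (commit messages, work items); the keyword-to-factor mapping is hypothetical.

```python
# Hypothetical mapping from registered change keywords to the four significant factors.
KEYWORD_TO_FACTOR = {
    "p2p": "peer-to-peer networking",
    "gossip": "peer-to-peer networking",
    "consensus": "consensus algorithms",
    "raft": "consensus algorithms",
    "permission": "role-based nodes and permissions",
    "acl": "role-based nodes and permissions",
    "state": "state and transactional data consistency",
    "ledger": "state and transactional data consistency",
}


def impacted_factors(change_descriptions):
    """Return the infrastructure factors touched by a set of registered changes."""
    factors = set()
    for text in change_descriptions:
        lowered = text.lower()
        for keyword, factor in KEYWORD_TO_FACTOR.items():
            if keyword in lowered:
                factors.add(factor)
    return factors


if __name__ == "__main__":
    changes = ["Tune raft election timeout", "Fix ACL check for observer nodes"]
    print(impacted_factors(changes))
    # A non-empty result means new or modified network topologies may be required.
```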

Defining network topologies: 

Once impact analysis is done and it is decided that new network topologies must be created to account for changes, then the next requirement is to define all network topologies.

For instance, suppose an organization reports an issue related to the functioning of nodes: whenever there is an even number of consensus nodes within the network, consensus seems to get stuck or takes longer than usual. To resolve the problem, developers work out the logic. If the QA network does not have an even number of consensus nodes, the need is to either convert one existing node into a consensus node or add a new one to the network. Either way, the network topology will differ from the one that exists.

With proper automation in place, it is possible to keep a registry of all existing QA network topologies. Once the required network topology is fed in, the automation should report whether a new network must be created or an existing one can be utilized after modifying its number of nodes. Manually performing this task could take hours, and sometimes even days, if the organization has a long list of network topologies in its QA environments.
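
A minimal sketch of such a topology registry follows, with hypothetical topology attributes; a real registry would also track node roles, permissions, and which networks are currently in use.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Topology:
    """Hypothetical description of a QA network topology."""
    name: str
    consensus_nodes: int
    observer_nodes: int


# Illustrative registry of topologies already running in the QA environment.
EXISTING = [
    Topology("qa-small", consensus_nodes=4, observer_nodes=1),
    Topology("qa-large", consensus_nodes=7, observer_nodes=3),
]


def plan_for(required: Topology) -> str:
    """Decide whether an existing network can be reused, modified, or must be built."""
    for topo in EXISTING:
        if (topo.consensus_nodes, topo.observer_nodes) == (
            required.consensus_nodes,
            required.observer_nodes,
        ):
            return f"reuse {topo.name} as-is"
    for topo in EXISTING:
        if topo.consensus_nodes >= required.consensus_nodes:
            return f"modify {topo.name}: adjust its nodes to match the required topology"
    return "create a new network topology from scratch"


if __name__ == "__main__":
    # e.g., a fix that must be verified with an even number of consensus nodes
    print(plan_for(Topology("requested", consensus_nodes=6, observer_nodes=1)))
```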

Setting up nodes for required network topology: 

There are two possibilities here: either modify an existing node or create a new node to form the new network topology. In either case, nodes will have to be set up manually. Again, this takes time and requires someone who understands the QA environments from an administrative perspective to set up the nodes. Without automation, this increases the time taken and creates a dependency on other groups that must be coordinated with for node setup.

Creation of Network Topology: 

After setting up the required number of nodes, a network is created based on the network initialization process. If multiple network topologies have to be tested with several scenarios, then for each network topology, the following activities have to be performed:

  • Cleaning all involved nodes if existing network nodes are reused
  • Initialization of network
  • Allowing for stabilization of the nodes for all components/services
  • Execution of functional scenarios
  • Destroying the network to free the nodes

As all the above activities must be completed for each network topology, doing them without automation consumes a lot of time and makes testing highly error-prone. Most of the time, network topologies use nodes that overlap with other network topologies, so missing any of the activities underlined earlier will introduce inconsistency into the other running networks. Experience suggests that cleaning the nodes is a highly error-prone activity within a shared environment of multiple network topologies. It becomes tough to determine why errors are occurring: whether they are actual bugs to be reported, or whether some nodes are now being used by two or more networks because clean-up (of nodes) was not done correctly. Without proper automation, all these activities take significant time and raise false alarms for issues that popped up due to human error.
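
A minimal sketch of automating that per-topology cycle is shown below. The helper functions are placeholders for environment-specific commands (node-management scripts, a Docker or cloud API), so this is an outline rather than a working harness.

```python
import time


# Placeholder hooks: wire these to your node-management scripts or APIs.
def clean_nodes(nodes): ...
def init_network(topology, nodes): ...
def nodes_stable(nodes) -> bool: ...
def run_functional_tests(topology) -> bool: ...
def destroy_network(topology, nodes): ...


def execute_topology(topology, nodes, stabilization_timeout=600) -> bool:
    """Run the full per-topology cycle: clean, initialize, stabilize, test, destroy."""
    clean_nodes(nodes)                       # avoid leftover state from other topologies
    init_network(topology, nodes)
    deadline = time.time() + stabilization_timeout
    while not nodes_stable(nodes):           # wait until all components/services settle
        if time.time() > deadline:
            destroy_network(topology, nodes)
            raise TimeoutError(f"{topology} did not stabilize")
        time.sleep(10)
    try:
        return run_functional_tests(topology)
    finally:
        destroy_network(topology, nodes)     # always free the nodes for other topologies
        clean_nodes(nodes)
```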

Execution of functional and non-functional tests: 

Functional tests must be executed without fail, whereas non-functional tests are always subject to the changes being made. These (non-functional tests) become essential if there is a performance improvement or a fix required for any security vulnerability. They are also required for any exceptional fix that might hurt performance.

Functional tests are implicitly covered in the network creation phase, and almost every organization focuses on (or gives priority to) automating functional testing. Non-functional testing has always been the lowest priority for most; however, it becomes very tedious if it must be performed on multiple network topologies. In practice it is rarely run for all network topologies, as it has very little dependency on them: most of the time, non-functional testing operates at the node level rather than depending on different network topologies.

Conclusion

In its Hype Cycle for Blockchain Business report for 2019, Gartner predicts that within five to ten years, blockchain will have a transformational impact across industries. According to David Furlonger, distinguished research vice-president at Gartner, permissioned ledgers in several key areas of banking and investment services will see increased focus. In light of the uptick in interest from banking and investment services CIOs seeking to improve decades-old operations and processes, automation is desirable for driving ROI and efficiency in blockchain adoption.

Automated testing enables the developers to easily and quickly check new apps and updates for errors, defects, and other weaknesses. Infrastructure testing is one such area that organizations must automate as soon as possible if they desire to build robust decentralized applications. 

Magic FinServ’s automated test methodology is unique, and we have the relevant expertise to drive automation for testing Blockchain Infrastructure. We have had success with several clients who built financial products on blockchain platforms. 

To explore automated testing for blockchain infrastructure, write to us at mail@magicfinserv.com 

According to a recent forecast by Gartner, “by 2025, the business value added by blockchain will grow to slightly more than $176 billion, then surge to exceed $3.1 trillion by 2030.” Right from the voting process to the transfer of data for mission-critical projects, blockchain-based technology would be an integral part of the social, economic, and political setup the world over. 

There are many exciting components/features that make it possible for blockchain platforms to provide a secure decentralized architecture for activities ranging from processing transactions to storing data that is immutable. We have briefly discussed these in our earlier blog. The blog identified how various services and components make infrastructure testing a matter of utmost significance/consequence- while at the same time testing the developer’s or application team’s core competence.

Considering its immense impact in the days to come in all aspects of human life, it has become essential that clients investing in blockchain ensure that the nature of transactions is inviolable. 

To ensure this inviolability, the infrastructure of the blockchain must work seamlessly. Hence the need for infrastructure-testing of blockchain to verify if all the constituent elements are operating as desired. 

What comprises infrastructure testing for Blockchain/Distributed ledger platform 

In simple terms, infrastructure-testing of blockchain networks translates into verifying whether the end-to-end blockchain core network and its constituent elements are operating as desired. It is critical as it determines the reliability of a product, which depends entirely on nodes spread across the globe.

In decentralized applications built on blockchain or distributed ledger platforms, each constituent element is, by the nature of the operations, highly reliant on or linked with the others, so any shortcoming or failure could jeopardize operations. Hence, to ensure continuity, reliability, and stability of services, infrastructure testing should be carried out with high focus.

Defining the constituent elements of a Blockchain

  • All distributed ledger platforms, including blockchain, have a dedicated service responsible for establishing communication between the nodes utilizing peer-to-peer networking or any other networking algorithm.
  • There is also a component or service that makes the network of such applications fault-tolerant using consensus algorithms. 
  • Another critical aspect of blockchain platforms is making consensus on the state and transactional data to process, followed by persisting of the manipulated data. 
  • When it comes to private networks, also known as consortium networks, there are many ways to achieve permission for each node to provide a secure and isolated medium among the participants. 

For confirming production readiness of applications built over these platforms, infrastructure testing is as important as any other supported functionality. Without verifying functionality, no application can be deployed to production. Similarly, decentralized applications built over various platforms can be deployed to production only after the reliability of the infrastructure has been verified with all probable numbers of nodes.

What makes the entire exercise demanding are the following factors: 

  • Peer-to-peer networking (P2P)
  • Consensus algorithms
  • Role-based nodes along with permission for each node (meant only for private networks)
  • State and transactional data consistency under high loads along with resilience test of nodes

Another vital characteristic to be considered is the number of nodes itself. Since such applications' functionalities depend on the number of nodes, this is a key requirement. The number of nodes can vary depending upon:

  • Which service or component is to be tested 
  • How all the factors mentioned above impact the service or component

Importance of testing various components of Blockchain Infrastructure

Reliability testing

Reliability of infrastructure is by far the most challenging phase for any blockchain developer or application team. Here, we explore whether an application can run on the targeted infrastructure or not. Defining application reliability across multiple machines (a.k.a. nodes, servers, participants, etc.) increases complexity exponentially due to the permutations and combinations of possible failures.

Hence, wherever multiple machines are involved, it is the natural course of action for developers and application teams to measure application reliability on the infrastructure on which such applications will run. All the factors enlisted earlier attest that infrastructure testing is of prime consequence for decentralized applications built on all available platforms. 

Peer-to-Peer networking

If there is any flaw in peer-to-peer networking, then nodes will not communicate with each other. If nodes cannot establish connections with each other, then nodes will not be able to process the transaction with the same state. If nodes are not in the same state, then there will not be any new data manipulated and created to persist. In the case of blockchain, there will not be any new blocks. For the distributed ledger, there will not be any new data appended to the ledger. This may lead to chain forking or a messy state of data across the nodes that will eventually result in the network reaching a dead-end or getting stuck. 

Improper peer-to-peer network implementation can also lead to data exposure to unintended nodes that do not have permission to see the data. To overcome this risk of unintentional data exposure, proper testing must be performed. That will ensure that the expected number of participants and expected numbers of new participants can participate within the network as appropriate communication is established between nodes based on each node’s role and permissions. 

Consensus algorithms: 

Consensus algorithms have two critical functions: 

  • Drive consensus by ensuring that a majority of nodes are processing new data with the same state
  • Provide fault tolerance for network

Consensus algorithms must be verified with all possible types of nodes and all probable permissions that can be defined for each node. To verify the consensus algorithms, multiple network topologies are needed. Improper verification will result in the network getting stuck. It would also result in sharing of data with nodes that were not supposed to get the data. 

Any flaw in consensus will result in a “stuck network” and cause the forking of data. Worse, data can be manipulated with fraudulent nodes. Depending upon which consensus algorithm is used, the network topology can be created and verified with all expected features claimed to be working. 

Role-based nodes, along with their permission 

Each platform supports different roles for each node to ensure that nodes get only the intended information based on the defined permissions. Depending upon the different kinds of roles and their respective permissions, various network topologies are created to perform all required verifications. If any verification is missed, sensitive data may be exposed to unintended nodes. The way data is shared between nodes is governed by consensus algorithms based on the defined permissions.

Any flaw in the permission control mechanism can lead to sensitive data leakage. Data leakage is catastrophic, more so for private networks. The importance of accuracy cannot be emphasized more in this case, and it can only be achieved by ensuring a proper testing mechanism is being utilized.

State and transactional data consistency 

As there can be any number of nodes in real-time, it is highly critical to verify that each node has the same state and transactional data. All complicated transactions must be performed with an adequately defined load to ensure that all nodes have the same state and transactional data. 

Resiliency-based verification must be performed so that all nodes can reach the same state and transactional data, even when a fault is intentionally introduced into randomly selected nodes within a running network.
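
As an illustration, for an Ethereum-style network where each node exposes a JSON-RPC endpoint, a consistency check can compare the chain head reported by every node; the sketch below uses the web3.py library, and the node URLs are hypothetical.

```python
from web3 import Web3  # pip install web3

# Hypothetical JSON-RPC endpoints of the nodes under test.
NODE_URLS = [
    "http://node1.qa.example:8545",
    "http://node2.qa.example:8545",
    "http://node3.qa.example:8545",
]


def check_state_consistency() -> bool:
    """Verify that all nodes report the same chain head (block number and hash)."""
    heads = []
    for url in NODE_URLS:
        w3 = Web3(Web3.HTTPProvider(url))
        block = w3.eth.get_block("latest")
        heads.append((url, block.number, block.hash.hex()))
    reference = heads[0][1:]
    # Nodes may briefly differ by a block while syncing; here we require an exact match.
    return all(head[1:] == reference for head in heads)


if __name__ == "__main__":
    assert check_state_consistency(), "state/transactional data is inconsistent across nodes"
```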

Conclusion

To conclude, infrastructure testing should not be substituted with any traditional functional testing process. Furthermore, as this is a niche area, infrastructure testing must be entrusted to a partner with industry-wide experience and capable resources having a sound understanding of all the factors underlined above. A real-time experience in establishing testing processes for such platforms is a highly desired prerequisite.  Without infrastructure testing, it is perilous to launch a product in the market. 

Magic FinServ has delivered multiple frameworks designed for all the above factors. With an in-depth knowledge of multiple blockchain platforms, we are in an enviable position to provide exactly what the client needs while ensuring the highest level of accuracy and running all frameworks following industry standards and timelines. As each customer has their own specific way of developing such platforms and choosing different algorithms for each factor, choosing an experienced team is undoubtedly the best option to establish an infrastructure testing process and automate end-to-end infrastructure testing.

To explore infrastructure testing for your Blockchain/DLT applications, write to us at mail@magicfinserv.com

Background: Ethereum is the first programmable blockchain platform offered to the developer community for building business logic in the form of Smart Contracts, which in turn lets developers build decentralized applications for any business use case. Once Smart Contracts are developed, they need to be registered on the blockchain by deploying them to it. After deployment, the contract is assigned an address through which its methods can be executed via the abstraction layer built over the ABI. Web3 is the most popular module for interacting with a local or remote node participating in the underlying blockchain network built over Ethereum.
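
As a minimal illustration of that interaction, the sketch below uses web3.py against a local development node. The artifact file, the unlocked deployer account, and the getValue method are assumptions for illustration, not part of any specific application.

```python
import json

from web3 import Web3  # pip install web3

# Connect to a (hypothetical) local development node.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Compiled contract artifact (ABI + bytecode), e.g., produced by the Solidity compiler.
with open("MyContract.json") as f:
    artifact = json.load(f)

deployer = w3.eth.accounts[0]  # an unlocked account on the dev node

# Deploy the contract; its address is assigned once the transaction is mined.
factory = w3.eth.contract(abi=artifact["abi"], bytecode=artifact["bytecode"])
tx_hash = factory.constructor().transact({"from": deployer})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("contract deployed at", receipt.contractAddress)

# Execute a contract method through the ABI abstraction layer.
contract = w3.eth.contract(address=receipt.contractAddress, abi=artifact["abi"])
print("current value:", contract.functions.getValue().call())  # getValue is hypothetical
```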

Defining a Decentralized Application Architecture for Testing: Needless to say, testing any decentralized application built over a blockchain platform is not only highly complex but also requires a specialized skill set and the analytical mindset of white-box testers. At Magic FinServ, we possess rich real-time experience with some of the most complicated aspects of testing blockchain-based decentralized applications. Based on this experience, our strategy divides a blockchain-based decentralized application into three isolated layers from a testing perspective –

1. Lowest Layer – the blockchain platform on which smart contracts are executed:
   a. Ethereum Virtual Machine
   b. Encryption (hashing and digital signatures using cryptography)
   c. P2P networking
   d. Consensus algorithm
   e. Blockchain data and state of the network (key-value storage)

2. Middle Layer – the business layer (Smart Contracts), where business logic is built for business use cases:
   a. Smart Contract development – Smart Contract compilation, deployment, and execution in a test network
   b. Smart Contract audit

3. Upper Layer – the API layer for contracts, nodes, blocks, messages, accounts, key management, and miscellaneous endpoints, providing an interface to execute business logic and get updates on the state of the network at any given point in time. These interfaces can also be embedded between upstream and downstream systems.

Based on these defined components of blockchain, we build an encompassing generic testing strategy for each layer in 2 broad categories –

1. Functional: As the name suggests, this category ensures that all components belonging to each layer function as per the acceptance criteria defined by the business user/technical analyst/business analyst. We prefer to include System/Integration testing under this category to ensure that all components not only work as defined within each layer but also, as a complete system, accomplish the overall business use case.

2. Non-Functional: This category covers all kinds of testing other than functional testing like Performance, Security, Usability, Volatility & Resiliency testing not only at Node level but container level as well if Docker is being used to host any service.

In defining the generic testing strategy for these two broad categories, we assume that the infrastructure needs to be set up first and that it will not be the same every time. Before moving ahead, we need to answer a few other questions –

Question1: Why is the setting up of infrastructure the most critical & essential activity to start strategizing blockchain-based application testing?  

Question2: What all potential challenges do testers face while setting up infrastructure?

Question3: What solutions do testers have to overcome the challenges of infrastructure setup?

To answer the first question:

We need to take a few steps backward to understand what we do for testing any software application in the traditional approach. For starting any software application testing, an environment has to be set up, but that is always a one-time activity until the development team does any significant change to the underlying platforms or technology. However, that happens very rarely. So testers can continue with testing without much worry about infrastructure.

The two core concepts of Blockchain technology are P2P networking & consensus algorithms. Testing these two components is heavily dependent on infrastructure setup, meaning how many nodes we need to test one feature or the other.

For P2P, we need a different number of connected nodes with an automated way to kill nodes randomly & then observe the behavior in each condition.

For consensus, it depends on what kind of consensus algorithm is being used; based on its nature, different types of nodes, each in different numbers, will be needed.

Another differentiating factor that is not applicable for public blockchain but has a significant impact on a private blockchain is different types of permission to different nodes.

There is a frequent requirement to keep changing network topology for verifying most of the features of decentralized applications.

By now, we know how important it is to change the network topology for each feature; otherwise, testing would not be as effective. A blockchain network is a collection of many machines (a.k.a. nodes) connected through peer-to-peer networking, so it is always a priority to automate the infrastructure required to mimic the network topology needed for testing.

To answer the second question:

1. If we manually spin off instances – let's assume five instances – with the required software and user setup, we need to spend almost 2–3 hours per instance.

2. Manually setting up machines is highly error-prone & mundane. Even simple automation does not help until the automation framework is intelligent enough to understand the need for different network topologies.

3. Due to agile methodology adoption, spending so much time setting up just infrastructure is not acceptable as the testing team usually does not have that length of time to complete testing for any given sprint.

4. All testers have to understand infrastructure setup as all need to change network topology to test most of the features. Finding a tester with good infrastructure knowledge is also challenging. 

5. Invalid network topology, most of the time, does not show an immediate failure due to the blockchain concept’s inherent nature. Eventually, an incorrect network topology leads to time and effort spent on non-productive testing without finding any potential bugs.

6. A high defect rejection ratio by the dev team, either due to an incorrect network topology or due to incorrect peering among nodes.

To answer the third question:

There are four ways to set up a network of nodes –

1. Physical machines with LAN

2. Virtual Machines

3. Docker containers on the same machine, each of which can be considered an isolated machine

4. Cloud Platform

We use Docker containers and cloud platforms to set up the infrastructure for testing blockchain-based applications, as setting up physical machines or virtual machines is not viable from a maintenance perspective.

Physical machines with LAN: Setting up a blockchain network with physical devices is tough, and scalability is challenging since we need additional devices to achieve the desired testing quality. During infrastructure testing, we need to take machines (a.k.a. nodes) up and down to verify volatility, which is a cumbersome process for the tester as well. Setting up the network with physical devices requires physical space and regular maintenance of the machines. We usually do not recommend this option; however, if a customer requires the testing to be done in such a manner, we can define a detailed strategy to execute it.

Virtual Machines: Compared to a network of physical machines, virtual machines do have a lot of advantages. However, increasing the number of VMs on an underlying device complicates matters considerably, since maintaining VMs is not user-friendly. Another disadvantage is that we need to hardcode the limits of all the resources beforehand. Combining Option1 and Option2 (multiple physical machines, each with multiple VMs) seems to be a better choice, although it still requires a lot of maintenance and carries overheads that act as a time sink for the tester. As reducing time to test is a critical aspect of quality delivery, we focus on saving as much time as possible so it can be invested in other higher-value elements of testing.

Cloud Platform: The advantage of using a cloud platform lies in the ability to spin off as many machines as needed without the overhead of maintenance or any other physical activity. However, it is still an uphill task to maintain such a network with multiple machines on the cloud alone. Eventually, we concluded that by combining Option3 (Docker containers) with Option4 (a cloud platform), we could create a very solid strategy for performing infrastructure testing while overcoming the various problems above.

Based on our real-world experience of trying the various options individually and in combination, our recommended approach is the following process.

Always perform Sanity/Smoke testing for any build with docker containers. Once all sanity/smoke tests finish without any failure, then switch to replicate the required network topology for functional testing of new enhancements & features.
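
A minimal sketch of replicating a required topology with Docker is shown below, using the docker SDK for Python; the node image, roles, and environment variable are hypothetical, and a real setup would also wire up ports, volumes, and the network-initialization step.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Hypothetical node image and topology: 4 consensus nodes + 1 observer node.
NODE_IMAGE = "myorg/blockchain-node:latest"
TOPOLOGY = {"consensus": 4, "observer": 1}


def spin_up_topology(name: str):
    """Create an isolated Docker network and start one container per node."""
    network = client.networks.create(name, driver="bridge")
    containers = []
    for role, count in TOPOLOGY.items():
        for i in range(count):
            containers.append(
                client.containers.run(
                    NODE_IMAGE,
                    name=f"{name}-{role}-{i}",
                    network=name,
                    environment={"NODE_ROLE": role},  # assumed to be read by the image
                    detach=True,
                )
            )
    return network, containers


def tear_down(network, containers):
    for container in containers:
        container.stop()
        container.remove()
    network.remove()


if __name__ == "__main__":
    net, nodes = spin_up_topology("smoke-topology")
    # ... run sanity/smoke tests here, then:
    tear_down(net, nodes)
```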

The advantages of our approach are

1. Build failure issues can be found in less time & can be reported back to dev without any delay that cloud infrastructure introduces. Before taking this approach, we had to spend 2-3 hours to report any build failure bug, whereas the same can be caught in 5-10 minutes as we always run selective test cases under sanity/smoke. 

2. Saved the cost of cloud infrastructure in case of failure in the build, as there is no uptime in case of build failure. 

3. Saved a lot of time for the testing team to spend in infrastructure setup. 

4. Dev team gets more time to fix issues found in sanity/smoke testing as it gets reported in just a few minutes. 

5. Significant reduction in rejection of bugs by the development team. 

6. Timely delivery percentage for the build, without any major bug, has increased significantly. 

A deeper dive into the testing strategy using Docker containers with a cloud platform will be covered in an upcoming blog, followed by our automation framework for infrastructure setup testing. We will also try to answer the questions most frequently asked by customers –

Question1: Why should customers be concerned about Infrastructure testing for decentralized applications?

Question2: Why should customers look for an automated way of Infrastructure Setup testing for blockchain-based decentralized applications?

Stay tuned for the upcoming blogs and follow us on LinkedIn for timely updates.  

What are Smart Contracts?

Smart contracts are translations of an agreement, including terms and conditions, into a computational code. Blockchain follows the “No Central Authority” concept, and its primary purpose is to maintain transaction records without a central authority. To automate this process of recording transactions, Smart Contracts were constituted. Smart Contracts carry several beneficial traits, including automation, immutability, and a self-executing mode.

What is a DAML Smart Contract?

DAML is the open-source language from Digital Asset created to support DLT, or distributed ledger technology. It allows multiple parties to transact in a secure and permissioned way. Using DAML as the coding language enables developers to focus only on building the smart contracts' business logic rather than fretting about the nitty-gritty of the underlying technology. DAML Smart Contracts run on various blockchain platforms and on regular SQL databases while providing the same logical privacy model and immutability guarantees.

A personal perspective

As a technology leader with years of experience, one line I remember from my early days is the "Write Once, Run Anywhere" slogan used by Sun Microsystems to highlight the Java language's cross-platform benefits.

I believe, in the coming years, DAML is the language that will enjoy similar popularity as Java due to its cross-platform benefits, ease of use, and versatility. DAML can revolutionize how business processes are managed by simplifying the contracts and making them ‘smarter.’

Comparing DAML

DAML V/S General Purpose Languages

Today, a few popular general-purpose languages are in use for creating multi-party blockchain applications, e.g., Java, Go, and Kotlin.

All of these can also be used to create smart contracts. But the challenge lies in the sheer complexity of the task at hand: the code that needs to be produced to build an effective smart contract using these languages is daunting. DAML can achieve the same result with 5-7 times less code, in a much simpler manner.

Smart-contract basic data types are contract-oriented (parties, observers, signatories, and others), in direct contrast to the general-purpose languages (int/float/double). The very essence of a smart contract language such as DAML is one thing and one thing only – contracts – making it a superior choice for writing Smart Contracts.

Comparison with Existing Smart Contract Languages

The domain of Smart Contracts is better handled by languages that have been purpose-built for smart contracts, such as Solidity, DAML, and others. Among these smart contract languages, DAML is the only one that is open source and Write Once, Run Anywhere (WORA). DAML contracts are also private in nature: at a logical level, DAML enforces a strict privacy policy, while at the persistence level, different ledgers might implement privacy in different ways.

DAML for Private and Public Ledgers

The two types of ledgers, private and public, serve different purposes and should be used accordingly. The underlying concept is that information on the ledgers is immutable once created.

Public Ledgers: Open to all – anybody can join the system, and each member can read and write transactions. Examples: Bitcoin, Ethereum, and others.

Private Ledgers: Also known as permissioned networks or permissioned blockchains, these limit participation and offer higher security with fine-grained permissions. Examples: Hyperledger Fabric and Corda. Some private ledgers offer different privacy and data-sharing settings; Hyperledger Sawtooth, for example, although permissioned, allows all nodes to receive a copy of all transactions.

DAML, as an open-source language, allows the involved parties to transact in a secure and permissioned way, enabling developers to focus on the business logic rather than spending precious time fine-tuning the underlying persistence technology.


Sample of a Smart Contract: 

Reporting a trade transaction between two counterparties to a regulator or reviewing authority using Smart Contracts.

module Finance where

template Finance
  with
    exampleParty : Party
    exampleParty2 : Party
    exampleParty3 : Party
    regulator : Party
    exampleParameter : Text
    -- more parameters here
  where
    signatory exampleParty, exampleParty2
    observer regulator
    controller exampleParty can
      UpdateExampleParameter : ContractId Finance
        with
          newexampleParameter : Text
        do
          create this with
            exampleParameter = newexampleParameter

The key elements of the template are:

  • template name – the template keyword defines the template, which is followed by the names of its parameters and their types.
  • template body – introduced by the where keyword, it can include the items below.
  • template-local definitions – the let keyword lets you make definitions that have access to the contract arguments and are available in the rest of the template definition.
  • signatories – the signatory keyword (required): the parties (see the Party type) who must consent to the creation of an instance of this contract. You won't be able to create an instance of this contract until all of these parties have authorized it.
  • observers – the observer keyword (optional): parties that aren't signatories but whom you still want to be able to see this contract. For example, the SEC wants to know about every contract created, so the SEC should be made aware of this one.
  • agreement – optional text that describes the agreement that this contract represents.

Explanation of the code snippet

DAML is whitespace-aware and uses layout to structure blocks. Everything that is below the first line is indented and thus part of the template’s body.

The signatory keyword specifies the signatories of a contract instance. These are the parties whose authority is required to create the contract or archive it again – just like a real contract. Every contract must have at least one signatory.

Here the contract is created between two parties (exampleParty and exampleParty2), and the regulator is the observer. Every transaction is visible to the observer, i.e., the regulator (playing the role of the SEC in this case), so the SEC can look at every transaction. Smart contracts can thus be created in this space.

DAML's disclosure policy ensures that exampleParty3 will not be able to view the transactions, as it is neither a signatory, an observer, nor a controller; it is just a party referenced by the contract.

Here are links to repositories provided by Digital Asset that contain examples of several such use cases modeled in DAML:

  1. How to write smart contracts using DAML, with various use cases: https://github.com/digital-asset/ex-models
  2. A ledger implementation enabling DAML applications to run on Hyperledger Fabric 2.x (also useful for learning): https://github.com/digital-asset/daml-on-fabric

Compilation and Deployment of DAML

DAML comprises both the language and the runtime environment (in the form of libraries known as the DAML SDK). Developers need to focus only on writing smart contracts (using the language features provided by the DAML SDK) without bothering about the underlying persistence layer. It supports common data structures (List/Map/Tuple) and also provides functionality for creating new data structures.

Other notable features of DAML 

  • A .dar file is the result of compilation done through the DAML Assistant; .dar files are eventually uploaded to the ledger so that contracts can be created from the templates in the file. A .dar is made up of multiple .dalf files. A .dalf file is the output of a compiled DAML package or library, and its underlying format is DAML-LF.
  • Sandbox is a lightweight, in-memory ledger implementation available in the dev environment.
  • Navigator is a tool for exploring what is on the ledger; it shows which contracts can be seen by different parties and lets you submit commands on behalf of those parties.
  • DAML gives you the ability to deploy your smart contracts on the local (in-memory) ledger so that various scenarios can be easily tested.

Testing DAML Smart Contracts

1) DAML has a built-in mechanism for testing templates called ‘scenarios’. Scenarios emulate the ledger. One can specify a linear sequence of actions that various parties take, and subsequently these are evaluated with the same consistency, authorization, and privacy rules as they would be on the sandbox ledger or ledger server. DAML Studio shows you the resulting transaction graph.

2) Magic FinServ launched its Test Automation suite called Intelligent Scenario Robot, or IsRobo™, an AI-driven Scenario Generator that helps developers test smart contracts written in DAML. It generates the unit test cases (negative and positive test cases) scenarios for the given smart-contract without any human intervention, purely based on AI.

Usage in Capital Markets

Smart contracts, in general, have excellent applications across the capital markets industry. I shall cover some use cases in subsequent blogs, outlining how multi-party workflows within enterprises can benefit by minimizing data reconciliation between parties and allowing mutualization of the business process. Some popular applications currently being explored by Magic FinServ are:

  • Onboarding KYC
  • Reference data management
  • Settlement and clearing for trades
  • Regulatory reporting
  • Option writing contracts (Derivatives industry)

Recent Noteworthy Implementations of DAML Smart Contracts are: 

  • International Swaps Derivatives Association (ISDA) is running a pilot for its Common Domain Model (CDM) for clearing of interest rate derivatives using a distributed ledger.
  • The Australian Stock Exchange (ASX) and Swiss investment bank UBS are continually providing the inputs to validate the CDM’s additional functionality alongside ISDA and Digital Asset.

DAML Certification process

To get hands-on experience with DAML, free access to docs.daml.com is available, where developers may learn from the study material, download the runtime, and build sample programs. However, to reinforce learning and add this as a valuable skill, it is better to become a DAML-certified engineer. It is worth pursuing, as the fee is reasonable and the benefits are manifold. There are not many DAML developers available in the market, so it is a rather sought-after skill as well.

Conclusion

DAML is ripe for revolutionizing the way business processes are managed and transactions are conducted. 

The smart contracts that are developed on open-source DAML can run on multiple DLTs / blockchains and databases without requiring any changes (write once, run anywhere). 

With the varied applications and relative ease of learning, DAML is surely emerging as a significant skill to add to your bouquet if you are a technologist in the capital markets domain. 

To explore the DAML applications with Magic FinServ, read more here

To schedule a demo, write to us mail@magicfinserv.com 

Until recently, your enterprise may have considered smart contracts as a tool to bridge silos from one organization to another – that is, to establish external connectivity over blockchain. However, what if we proposed applying the same concept within the firm, to address enterprise-wide data reconciliation and system integration/consolidation challenges, expedite time to market, and streamline reporting (i.e., internal, regulatory, FP&A, and supplier-risk reporting)?

After all, about 70-80% of reconciliation activity takes place within the enterprise. The best part? A firm can do this with minimal disruption to its current application suite, operating system, and tech stack. We will look at traditional approaches and explain how smart contracts are the way to get started on one of those journeys from which one never looks back.

To set the stage, let's cover the self-evident truths. Reconciliation tools are expensive, and third-party tool implementations typically require multi-year (and multi-million-dollar) investments. Over 70% of reconciliation requirements are within the enterprise, amongst internal systems. Most reconciliation resolutions start with an unstructured data input (PDF/email/spreadsheet) that requires manual review/scrubbing before it can be ingested easily. For mission-critical processes, this "readiness of data" lag can result in delays and lost business, backlogs, unjustifiable costs, and, worst of all, regulatory penalties.

Magic Finserv proposes a three-fold approach to take on this challenge. 

  1. Data readiness: Tackle the unstructured data problem using AI and ML utilities that can access data sources and ingest them into a structured format. Often reconciliation is necessary because of incorrect or incomplete data; ML can anticipate what is wrong or missing based on past transactions and remediate it. This is auto-reconciliation.
  2. Given that unstructured data elements may reside in fragmented platforms or organizational silos, the firm must have an intelligent way of integrating and mutualizing that data with minimal intervention. An ETL process or data feed may look appealing initially; however, these are error-prone and do not remove the manual reconciliation tasks needed for exception management. Alternatively, a smart contract based approach can streamline your rule-based processes to create a single data source.
  3. Seamless integration to minimize the disconnect between applications. The goal is to create an environment where reconciliation is no longer required. Ideally.

We have partnered with Digital Asset to outline a solution that brings together an intelligent data extraction tool, a DAML smart contract and a capital markets focused integration partner that will reduce manual end to end reconciliation challenges for the enterprise.

Problem statement & Traditional Approach

Given that most enterprise business processes run through multiple disparate applications, each with its own database, it has been proven that a monolithic application approach is close to impossible – and it is not recommended, due to the well-known issues with monolithic application architectures. Traditionally, this challenge has been addressed using integration tools such as an Enterprise Service Bus (ESB) or SOA, where the business gets consumed in the cycle of data aggregation, cleansing, and reconciliation. Each database becomes a virtual pipeline of a business process, and an additional staging layer is created to deliver internal/external analytics. In addition, these integration tools are not intelligent, as they only capture workflows with adapters (ad hoc business logic) and do not offer privacy restrictions from the outset.

Solution

Digital Asset's DAML on X initiative extends the concept of the Smart Contract onto multiple platforms, including databases. DAML on X interfaces with the underlying databases using standard interfacing protocols, and the Smart Contract mutualizes the data validation rules as well as the access privileges. Once you create a DAML smart contract, the integrity of the process is built into the code itself, and the DAML runtime makes communication between disparate systems seamless. It is in its DNA to function as a platform-independent programming language specifically for multi-party applications.

Without replacing your current architecture, such as the ESB or your institutional vendor-management tool of choice, you can use the DAML runtime to make application communication seamless and have your ESB invoke the necessary elements of your smart contract via exposed APIs.

Handling Privacy, Entitlements & Identity Management

Every party in the smart contract has a “party ID” that plugs directly into the identity management solution you already use institutionally. You can even embed “trustless authentication”.

The idea is that entitlements, rights and obligations are baked directly into the language itself, as opposed to a typical business process management tool where you build out your business process and then marry in the entitlements in phase 3, only to realize the workflow needs to change.

DAML handles this upfront: all of the authentication is taken care of by the persistence layer/IDM you decide on. The smart contract template represents a data schema in a database, and the signatories/controllers in our example represent role-level permissioning of who can do what, when, and who can see what, when.

 The image below shows how the golden source of data is generated.


It is a purpose-built product that contains automatic disclosures and privacy parameters out of the box. You do not need to keep checking your code to verify whether the party exercising a command is actually allowed to see the data. All of this is within the scope of the DAML runtime.

Already kickstarted your enterprise blockchain strategy?

First of all, amazing! Second, since DAML smart contracts can run on databases or distributed ledgers of your choice (Fabric, Corda, etc.), it is a uniquely flexible solution: you can get started with application building and change the underlying ledger at any point. You can also integrate between multiple instances; for example, a DAML app running on Fabric and another DAML app running on Corda can talk to one another.

The key takeaway is that most enterprises are held up deciding which ledger meets their needs. With DAML’s intuitive, business-workflow-focused approach, you can develop your applications while you are still selecting your ledger fabric, expediting revenue capture, enabling consistent enterprise reporting and reducing the burden of reconciliation. The smart contract, through to the integration layer, is completely portable.

Security Token Offerings (STOs) appear to be the next wave in using blockchain concepts to transform current security instruments (equity, debt, derivatives, etc.) into digitized securities. STOs have gained popularity and momentum recently, largely due to the lack of regulation in the ICO world and the many ICO failures of 2018. Many startups are building STO platforms on programmable blockchains such as Ethereum, and recent developments suggest the space is moving toward an ecosystem with defined standards. Seeing the traction on GitHub around standards such as ERC-1400, ERC-1410, ERC-1594, ERC-1643 and ERC-1644, we started thinking about how a technology company like ours, specialized in ensuring quality standards for blockchain-based applications on Ethereum, can contribute. We began by defining the complete STO processing cycle in the context of real-world usage (from a functional perspective) on underlying Ethereum platforms (from a technology perspective).

Before defining the STO lifecycle, it is important to list the major participants and their roles:

  1. Issuers – Legal entities that develop, register and sell a security to raise funds for business expansion.
  2. Investors – Entities that invest in securities expecting financial returns.
  3. Legal & Compliance Delegates – Entities that ensure all participants and processes comply with the rules and regulations of the jurisdiction.
  4. KYC/AML service providers – Entities that provide KYC/AML checks for the required participants.
  5. Smart contract development communities – developers, smart contract auditors and QA engineers.

Most companies claiming to provide STO platforms use Ethereum as the underlying programmable blockchain, with few exceptions. The rationale for choosing Ethereum first is that it is a Turing-complete platform for building complex decentralized applications, with the logic defined inside smart contracts (Solidity being the most popular language among developers). In parallel, Ethereum continues to mature, improving security, performance and scalability through a steady stream of new features. A few players are using other platforms, and some are trying to build an entirely new blockchain dedicated to STOs. That last approach seems overly optimistic: building such a system might take years, and the current momentum around STOs does not seem likely to wait that long.

Based on the above, we can now define the generic STO lifecycle, from a functional standpoint, in two phases:

  1. Primary Market – issuance of the STO by the Issuer and investment in the STO by Investors
  2. Secondary Market – trading of the STO, either on exchanges or over the counter (OTC)

Primary Market

  1. To issue an STO (Issuer) –
    1. Registration of the Issuer
    2. Creation of the STO token
    3. Approval of the STO from Legal & Compliance
    4. Issuance of the STO post Legal & Compliance approval
  2. To invest in an STO (Investor) –
    1. Registration of the Investor
    2. KYC/AML for the Investor
    3. Whitelisting of the Investor for STOs post KYC/AML
    4. Investment in the STOs permitted by the corresponding whitelisting

Before we get into the technical underpinnings of the blockchain layer, we need to define the STO platform’s technical architecture from a user perspective. Every STO platform in existence today has –

  1. A Presentation layer (User Interface with any chosen front end technology, Integration with Wallets)
  2. A Business Layer (JS libraries to provide an interface to interact with Smart Contracts)
  3. A Data Layer (Ethereum data storage in blocks in the form of key-value storage)

Now let’s define a high-level technical overview, assuming the STO platform uses Ethereum as the underlying blockchain and that the backend layer is already set up –

  1. Creation of an external account for every participant, to bring everyone onto the Ethereum blockchain
  2. Definition of the off-chain and on-chain transactions for all activities of Issuers and Investors
  3. Merging of off-chain data with on-chain data
  4. Development of smart contracts
    1. A standard smart contract per STO, depending on the jurisdiction, for the generic processes among the required participants
    2. An STO-specific smart contract implementing the business/regulatory rules (a hypothetical whitelisting sketch follows this list)
    3. Smart contracts holding the business logic, especially for transaction processing
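As a purely illustrative aid, here is a minimal Solidity sketch of the kind of investor whitelist an STO-specific contract might maintain once off-chain KYC/AML has completed. The contract name, roles and function names are hypothetical and not part of any specific standard.

pragma solidity ^0.4.24;

// Hypothetical sketch only: an investor whitelist maintained after off-chain KYC/AML.
contract InvestorWhitelist {
    address public complianceOfficer;            // e.g. the Legal & Compliance delegate
    mapping(address => bool) public whitelisted; // investors cleared for this STO

    constructor() public {
        complianceOfficer = msg.sender;
    }

    modifier onlyCompliance() {
        require(msg.sender == complianceOfficer, "not authorised");
        _;
    }

    // Called once an investor has cleared the off-chain KYC/AML checks.
    function addInvestor(address investor) public onlyCompliance {
        whitelisted[investor] = true;
    }

    // An STO token contract could consult this before accepting an investment.
    function isWhitelisted(address investor) public view returns (bool) {
        return whitelisted[investor];
    }
}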

Based on the expertise of our group, Magic FinServ can contribute significantly to the development of smart contracts (written in Solidity) as well as to the auditing of those contracts.
For more details visit https://www.magicblockchainqa.com/our-services/#smart-contract-testing

In the next part, we will expand this high-level technical overview alongside the functional overview, followed by deeper insight into each of the functional and technical flows defined here.

When the ERC-20 standard came into existence, it simplified ICO token interoperability across wallets and crypto exchanges for all ERC-20-compliant tokens. Having standards for any process not only drives wider acceptance but also improves the interoperability needed to build an ecosystem. Being a technology-obsessed firm, we have always encouraged standards. An accepted standard helps developers – among the strongest stakeholders in the ecosystem, responsible for delivering workable solutions with the available technology – build that ecosystem while keeping the changes needed for interoperability minimal. No system in existence today is free of errors or failures in real-world usage; using global standards provides the added advantage that many such errors and failures have already been encountered and resolved by the wider tech community.

Today, it is of utmost importance to have standards that can integrate multiple systems (STO platforms, wallets and exchanges) with minimal changes and make security tokens easily interoperable across wallets and exchanges. Security Token Offerings cannot be the exception: they represent one of the biggest and most complicated technological shifts, transforming existing securities into digitized securities with automated processing on top of blockchain technology.

The recent traction on ERC-1400 (now moved to ERC-1411) has helped define standard libraries for the complete STO lifecycle, especially for on-chain/off-chain transactions. This compilation of requirements, gathered from the various participants involved together with a probable interface, has the potential to simplify the entire STO lifecycle, and it has technologists around the world excited. On GitHub, for instance, many working developers are participating in the discussions and sharing their experiences.

Ethereum Standards (ERC, short for Ethereum Request for Comments) related to regulated tokens

The standards below are worth reading to understand the rationale behind supporting more regulated transactions with Ethereum tokens:

  1. ERC-1404 : Simple Restricted Token Standard
  2. ERC-1462 : Base Security Token
  3. ERC-884 : Delaware General Corporations Law (DGCL) compatible share token

ConsenSys claims to have implemented ERC-1400 on GitHub and has named the solution Dauriel Network. Per GitHub, “Dauriel Network is an advanced institutional technology platform for issuing and exchanging tokenized financial assets, powered by the Ethereum blockchain.”

ERC-1400 (Renamed to ERC-1411) Overview

Smart contracts will eventually control all activities automatically: the security issuance process, the trading lifecycle from the issuer and investor perspectives, and the processing of events related to the security token. Let’s look at the ERC-1400 standard libraries with respect to each activity in the STO lifecycle:

  1. ERC-20: Token Standard (see the sketch after this list)
  2. ERC-777: A New Advanced Token Standard
  3. ERC-1410: Partially Fungible Token Standard
  4. ERC-1594: Core Security Token Standard
  5. ERC-1643: Document Management Standard
  6. ERC-1644: Controller Token Operation Standard
  7. ERC-1066: Standard way to design Ethereum Status Codes (ESC)
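For reference, the ERC-20 interface that the security-token standards above build on looks roughly as follows. This is a minimal sketch of the standard’s functions and events, not a full implementation.

pragma solidity ^0.4.24;

// Minimal sketch of the ERC-20 interface; the security-token standards listed
// above layer transfer restrictions, partitions and document management on top.
contract ERC20Interface {
    function totalSupply() public view returns (uint256);
    function balanceOf(address owner) public view returns (uint256);
    function allowance(address owner, address spender) public view returns (uint256);
    function transfer(address to, uint256 value) public returns (bool);
    function approve(address spender, uint256 value) public returns (bool);
    function transferFrom(address from, address to, uint256 value) public returns (bool);

    event Transfer(address indexed from, address indexed to, uint256 value);
    event Approval(address indexed owner, address indexed spender, uint256 value);
}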

The methods defined in each standard (as Solidity smart contract interfaces) map to activities across the three stages – Pre-Market, Primary Market and Secondary Market – and can be represented pictorially as below:

Before defining the mapping between the standard library methods and the activities across all three stages, it is important to distinguish on-chain activities from off-chain activities that will be processed outside the STO platform. Off-chain activities can be performed outside the main chain of the underlying blockchain platform and merged later. However, integration will be needed for all activities performed outside the STO platform, and this is where standards such as ERC-725 and ERC-735 (defined for identity management) play an important role.

All Pre-Market activities are expected to happen outside the STO platform, since they are largely documentation driven: structuring the offering and preparing the required documentation with all internal and external stakeholders, including the legal team, to ensure regulatory compliance. To bring a reference to all pre-market documentation onto the STO platform, a cryptographic representation (hash) of each document can be used effectively.
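A minimal sketch of what that anchoring could look like, assuming a simple registry keyed by a document label that stores a keccak256 hash per document. The contract and function names are our own, in the spirit of the document-management ideas behind ERC-1643.

pragma solidity ^0.4.24;

// Hypothetical sketch: anchoring off-chain pre-market documentation on-chain by hash.
contract DocumentAnchor {
    struct Doc {
        string uri;         // where the off-chain document lives (e.g. a URL)
        bytes32 docHash;    // keccak256 of the document contents
        uint256 anchoredAt; // block timestamp when the hash was anchored
    }

    mapping(bytes32 => Doc) private documents; // keyed by a short document label

    function setDocument(bytes32 name, string uri, bytes32 docHash) public {
        documents[name] = Doc(uri, docHash, now);
    }

    function getDocument(bytes32 name) public view returns (string, bytes32, uint256) {
        Doc storage d = documents[name];
        return (d.uri, d.docHash, d.anchoredAt);
    }

    // Anyone can later verify that an off-chain copy matches the anchored hash.
    function verify(bytes32 name, bytes32 candidateHash) public view returns (bool) {
        return documents[name].docHash == candidateHash;
    }
}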

Similarly, the KYC/AML process can happen off-chain, with proper integration into the STO platform through identity management (standards such as ERC-725 and ERC-735).

ERC-1400 (now also known as ERC-1411) covers all primary and secondary market activities, with proper integration to the off-chain data that brings the related documentation and identities onto the underlying blockchain on which the STO is designed.

Magic and its approach to defining the ERC-1411 mapping

Team Magic is working continuously to define the mapping between all the defined methods and the real-world activities of the primary and secondary markets. A key part of our strategy is to collect requirements from the various stakeholders: securities lawyers, exchange operators, KYC providers, custodians, business owners, regulators and legal advisors. Once the requirements are collected, our experienced business analysts (experts in pricing, corporate actions and risk assessment) reconcile them with the ERC-1400 standards, both to map each requirement and to find gaps in the standards. Our technology team then prepares the implementation strategy by developing the corresponding smart contracts in Solidity. Having an in-house smart contract for a specific case study (provided by our business analyst team) also helps us define the auditing of ERC-1400-specific smart contracts and the testing strategy for each contract.

The original promise of blockchain technology was security. However, blockchains might not be as invulnerable as initially thought. Smart contracts, the programs that govern blockchain transactions, have yielded to targeted attacks in the past.

The expressiveness of these programs lets developers implement anything the core platform allows, including loops in the code. The more freedom programmers are given, the more carefully the code needs to be structured, and the more likely it is that security vulnerabilities find their way into blockchain-based environments.

The Attacks that Plague Blockchain

Faulty blockchain coding can give rise to several vulnerabilities. For instance, during Ethereum’s Constantinople upgrade in January of this year, reentrancy attacks became a cause for concern. These are possibly the most notorious of all blockchain attacks. A smart contract may interface with an external smart contract by ‘calling’ it; this is an external call. Reentrancy attacks exploit malicious code in the external contract to withdraw money from the original smart contract. A similar flaw was first revealed during the 2016 DAO attack, in which hackers drained roughly $50 million from a Decentralized Autonomous Organization (DAO). Consider the following token contract from programmer Peter Borah, which appears to be a solid attempt at condition-oriented programming:

// Written in the old (pre-0.5) Solidity style: uses throw and an unguarded low-level call.
contract TokenWithInvariants {
    mapping(address => uint) public balanceOf;
    uint public totalSupply;

    // The "_" placeholder runs the function body first; the invariant is only
    // checked afterwards, as a post-condition.
    modifier checkInvariants {
        _;
        if (this.balance < totalSupply) throw;
    }

    function deposit(uint amount) checkInvariants {
        balanceOf[msg.sender] += amount;   // credits the caller-supplied amount
        totalSupply += amount;
    }

    function transfer(address to, uint value) checkInvariants {
        if (balanceOf[msg.sender] >= value) {
            balanceOf[to] += value;
            balanceOf[msg.sender] -= value;
        }
    }

    function withdraw() checkInvariants {
        uint balance = balanceOf[msg.sender];
        // The external call happens before the balances are updated.
        if (msg.sender.call.value(balance)()) {
            totalSupply -= balance;
            balanceOf[msg.sender] = 0;
        }
    }
}
The above contract executes state-changing operations after an external call. It neither places the external call at the end of the function nor uses a mutex to prevent reentrant calls. The code does do some things well, such as checking a global invariant: the contract balance (this.balance) should not fall below what the contract believes it holds (totalSupply). However, because of the "_" placeholder in the modifier, these invariant checks run only after each function body completes, treating the global invariant as a post-condition rather than something that holds at all times, including during a reentrant call. The deposit function is also flawed, since it credits the caller-supplied amount parameter rather than the ether actually sent with the transaction (msg.value).

Finally, the invariant check itself has a bug. Instead of,

if (this.balance < totalSupply) throw;

It should be,

if (this.balance != totalSupply) throw;

The original check only throws when the contract’s actual balance drops below what it believes it holds; it silently accepts the equally inconsistent state in which the actual balance is higher than it should be. The stricter inequality catches both.

These issues enable the contract to hold more money than it should, allowing an attacker to potentially withdraw more than their share and heightening the danger of reentrancy even when the rest of the code looks watertight.

Overflows and underflows are also significant vulnerabilities that non-ethical hackers can exploit as a Trojan horse. An overflow error occurs when a number is incremented above its maximum value; think of a car odometer that resets to zero after passing, say, 999,999 km. If we declare a uint8 variable, which occupies 8 bits, it can hold decimal values between 0 and 2^8 - 1 = 255. Now if we write:

uint8 a = 255;
a++;

This causes an overflow, since a’s maximum value is 255; in older Solidity versions the value silently wraps around to 0 rather than raising an error.

At the other end, underflow errors affect smart contracts in exactly the opposite direction. Taking a uint8 variable again:

uint8 a = 0;
a--;

We have now caused an underflow, which wraps a around to its maximum value of 255.

Underflow errors are the more probable of the two, since users are less likely to hold a very large quantity of tokens. The Proof of Weak Hands Coin (POWH) scheme from 4chan’s business and finance imageboard /biz/ lost roughly $800k overnight in March 2018 because of an underflow attack. Building and auditing secure mathematical libraries that replace the customary arithmetic operators is a sensible defense against these attacks.
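As a hedged illustration of that defense, here is a minimal sketch of checked arithmetic in the spirit of libraries such as SafeMath (the library name here is our own):

pragma solidity ^0.4.24;

// Minimal checked-arithmetic helpers; they revert instead of silently wrapping.
library CheckedMath {
    function add(uint256 a, uint256 b) internal pure returns (uint256 c) {
        c = a + b;
        require(c >= a, "addition overflow");     // wrap-around would make c < a
    }

    function sub(uint256 a, uint256 b) internal pure returns (uint256) {
        require(b <= a, "subtraction underflow"); // would otherwise wrap below zero
        return a - b;
    }
}

A token contract would then write balanceOf[to] = CheckedMath.add(balanceOf[to], value); instead of using the raw + operator.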

The 51% attack is also prevalent in the world of cryptocurrency: a group of miners controlling more than 50% of the network’s mining hashrate can control which new transactions are confirmed. Similarly, external contract referencing exploits Ethereum’s ability to reuse code from, and interact with, already existing contracts by hiding malevolent actors behind those interactions.

Smart contract auditing that combines the attention of manual code analysis with the efficiency of automated analysis is indispensable in preventing such attacks.

Solving the Conundrum

Fixes for such security risks in blockchain-based environments are very much possible. A process-oriented approach is a must, together with agile quality assurance (QA) models. Robust automation frameworks are also crucial for weeding out coding errors and thereby strengthening smart contracts.

In the case of reentrancy attacks, avoiding external calls is a good first step, as is adding a mutex: a state variable that locks the contract during code execution and blocks reentrant calls. All logic that changes state variables should run before any external call. Correct auditing will ensure these steps are followed. In the case of overflow and underflow attacks, the right auditing tools will flag unchecked arithmetic and push teams toward mathematical libraries for safe math operations; the SafeMath library for Solidity is a good example.
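Below is a minimal sketch, under our own naming, of both reentrancy mitigations applied to a withdraw function: state changes happen before the external call, and a simple mutex blocks reentrant calls.

pragma solidity ^0.4.24;

// Illustrative only: a withdraw pattern hardened against reentrancy.
contract SafeWithdraw {
    mapping(address => uint256) public balanceOf;
    bool private locked; // mutex state variable

    modifier noReentry() {
        require(!locked, "reentrant call");
        locked = true;
        _;
        locked = false;
    }

    function deposit() public payable {
        balanceOf[msg.sender] += msg.value;        // credit the ether actually sent
    }

    function withdraw() public noReentry {
        uint256 amount = balanceOf[msg.sender];
        balanceOf[msg.sender] = 0;                 // effects first...
        require(msg.sender.call.value(amount)()); // ...external interaction last
    }
}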

To prevent external contract referencing, even something as simple as using the ‘new’ keyword to create contracts may be missed in the absence of proper auditing. This one step ensures that a fresh instance of the referenced contract is created when the code executes, so an attacker cannot substitute the original contract with anything else without changing the smart contract itself.
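A small, hypothetical illustration of that point: creating the dependency with ‘new’ pins the exact code being deployed, whereas accepting an externally supplied contract address would let an attacker pass in a malicious look-alike. The contract names below are our own.

pragma solidity ^0.4.24;

contract Vault {
    function lockFunds() public pure returns (string) {
        return "locked";
    }
}

contract VaultOwner {
    Vault public vault;

    constructor() public {
        // `new` deploys a fresh Vault whose bytecode is fixed at compile time,
        // so it cannot be swapped for a different contract at another address.
        vault = new Vault();
    }
}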

Magic BlockchainQA’s pioneering QA model has created industry-leading service level agreements (SLAs). Our portfolio of auditing services leverages our expertise in independently verifying blockchain platforms, reducing losses on investments for fintech firms while delivering end-to-end integration, security and performance. Crucially, this will help usher in widespread acceptance of blockchain-based platforms. As blockchain-based environments constantly evolve, we evolve as well, tackling new challenges and threats while ensuring that our tools can audit these contracts impeccably.

Blockchain technology first came with the promise of unprecedented security. Through correct auditing practices, we can fulfill that original promise. At Magic BlockchainQA, we aim to deliver on it every single time.
