Consensus in Hyperledger Fabric: Migrating from Kafka to Raft

by Hleb Ioda, August 27, 2019
These detailed step-by-step instructions explain how to launch a network, determine the number of channels to modify, enable the maintenance mode, etc.

A new ordering service available

Kafka-based consensus (ordering service) has been a viable fault-tolerant solution for running Hyperledger Fabric in a production-grade network. With version 1.4.1, Fabric introduced the Raft ordering service. With this option, you get a distributed fault-tolerant service that is easier to set up and maintain, and extending a network with new organizations becomes simpler, since there is no need to rely on a third-party Kafka cluster.

In previous versions of the platform, it was impossible to change the consensus type of a blockchain network without a full redeployment. However, Hyperledger Fabric v1.4.2 introduced a mechanism that makes it possible to migrate a network from a Kafka-based consensus to a Raft-based one.

The official documentation for this version describes the migration process from a high-level perspective, assuming that a user has sufficient expertise in channel configuration update transactions. So, we decided to provide a more detailed, step-by-step tutorial based on the “Building Your First Network” (BYFN) scenario. We also deliver recommendations on configuring the Raft ordering service and on testing chaincode invocation.

To follow the instructions below, you need to be familiar with the basics of Hyperledger Fabric architecture, as well as Docker Compose. Before proceeding with our tutorial, please check out the official prerequisites.

 

Launch a network with a Kafka orderer

With the following commands, we will clone a repository with the BYFN example and download the Hyperledger Fabric Docker images and binary files needed to create our blockchain network.
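A sketch, assuming the fabric-samples repository and the Fabric bootstrap script; if the script URL has moved, use the one from the official prerequisites page.

git clone https://github.com/hyperledger/fabric-samples.git
cd fabric-samples
# Download the platform binaries and Docker images for Fabric v1.4.2.
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 1.4.2
cd first-network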

We recommend mounting a working directory of the CLI container to your host machine. This will help us to edit configuration files with a convenient GUI text editor or to easily copy configs from a remote machine to a local one.

So, open first-network/docker-compose-cli.yml and find the volumes section of the cli service.

Add a volume mapping as shown below.
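A minimal sketch of the extra volume entry; the host-side ./workdir path is our own choice, while the container path matches the CLI container's default working directory in BYFN.

  cli:
    volumes:
      - ./workdir:/opt/gopath/src/github.com/hyperledger/fabric/peer/workdir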

After that, we may finally launch the network.
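Generate the artifacts and bring the network up with the Kafka ordering service. A sketch, assuming the byfn.sh script from fabric-samples v1.4.x, where the -o flag selects the consensus type:

./byfn.sh generate
./byfn.sh up -o kafka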

 

Determine the number of channels to modify

First, we need to know the system channel name. In the BYFN scenario, it is called byfn-sys-channel, while the default name is testchainid, so you need to find out which name your network actually uses. The easiest way to do so is to check the orderer logs.

By running a command like the one below, you will get output containing the system channel name.
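A sketch, assuming the default BYFN container name; the exact wording of the log lines varies between Fabric versions, so simply look for the channel creation messages.

docker logs orderer.example.com 2>&1 | grep -i channel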

Remember this value; we will need it later.

After that, we need the full list of channels in your Hyperledger Fabric network. To obtain it, use the peer channel list command in the CLI container, as shown below. In our case, there is only one additional channel to modify: mychannel.
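A sketch; the command lists the channels joined by the peer the CLI container is configured for:

docker exec cli peer channel list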

After running the command, you will get a list of the channels the peer has joined.

Please be aware that the peer you have credentials for may not be joined to all of the network's channels, so you may need to repeat this step with several different peers and credential sets to compile the full list.

 

Put the network into the maintenance mode

Make sure that you have logged into the CLI container. If not, log in with the following command.
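The standard way to get a shell inside the BYFN CLI container:

docker exec -it cli bash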

Next, we will fetch the configuration of our working channel, mychannel. Make sure that CHANNEL_NAME equals the actual channel name determined in the previous step.
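A sketch of fetching the current channel configuration and decoding it to JSON with configtxlator; the ORDERER_CA path follows the crypto material layout mounted into the BYFN CLI container:

export CHANNEL_NAME=mychannel
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

# Fetch the latest config block and extract its config section as JSON.
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json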

Next, we’ll need to determine the name of the orderer organization's MSP (Membership Service Provider). Usually, it is specified in configtx.yaml and equals OrdererMSP.
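One way to read the MSP ID straight from the decoded configuration; the jq path is an assumption based on the usual structure of a decoded channel config:

jq '.channel_group.groups.Orderer.groups[].values.MSP.value.config.name' config.json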

After running the command, we get "OrdererMSP" as the output.

Remember this value; we will need it later.

Then, we need to modify the channel configuration to put the channel into the maintenance mode. Make sure that CORE_PEER_LOCALMSPID has the value obtained in the previous step and that channel_id points to the actual name of the channel we are modifying.
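A sketch of the full update, run as the orderer organization's admin (the MSP path follows the BYFN defaults): flip ConsensusType.state to STATE_MAINTENANCE, compute the config delta, wrap it into an envelope, and submit it.

# Act as the orderer org admin for ordering-service changes.
export CORE_PEER_LOCALMSPID="OrdererMSP"
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/users/Admin@example.com/msp

# Switch the consensus state to maintenance in a modified copy of the config.
jq '.channel_group.groups.Orderer.values.ConsensusType.value.state = "STATE_MAINTENANCE"' config.json > config_mod.json

# Encode both versions, compute the delta, and wrap it in an update envelope.
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input config_mod.json --type common.Config --output config_mod.pb
configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated config_mod.pb --output config_update.pb
configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate | jq . > config_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CHANNEL_NAME'","type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_envelope.json
configtxlator proto_encode --input config_update_envelope.json --type common.Envelope --output config_update_envelope.pb

# Submit the update to the orderer.
peer channel update -f config_update_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA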

A successful channel update will produce a "Successfully submitted channel update" message in the CLI output.

Repeat these steps for all of your channels.

Next, we need to put the system channel (in our case, byfn-sys-channel) into maintenance. The steps are pretty much the same and are also performed in the CLI container; just make sure that the channel name and the orderer MSP are configured correctly.
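The only difference is the channel name; a sketch:

export CHANNEL_NAME=byfn-sys-channel
# ...then repeat the fetch, modify, and update sequence shown above.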

After all the configuration updates were performed successfully, log out from the CLI container.

Then, restart all the containers.
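The simplest option for the sample network is to restart every container; a sketch:

docker restart $(docker ps -aq)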

 

Migrate, actually

Log back in to the CLI container.

Then, make sure that the following values are set correctly in your session:

  • export CHANNEL_NAME=mychannel
  • export CORE_PEER_LOCALMSPID="OrdererMSP"
  • echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_envelope.json

Those are the values we determined earlier. If everything is okay, you can start the migration.
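A sketch of the preparation step: fetch the fresh configuration of the channel (now in the maintenance mode) into the mounted workdir and make a copy for editing. The switch_to_raft_mychannel directory name matches the layout referenced below.

mkdir -p workdir/switch_to_raft_mychannel && cd workdir/switch_to_raft_mychannel
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
cp config.json config_mod.json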

Open config_mod.json in an editor of your choice. It should be located in the workdir/switch_to_raft_mychannel directory of your repository, since we mounted this directory earlier. Find the ConsensusType block. We need to modify the metadata and type fields to make them look similar to what is displayed below.

"ConsensusType": {
            "mod_policy": "Admins",
            "value": {
              "metadata": {
                "consenters": [
                  {
                    "client_tls_cert": "LS0tLS1<…>LS0tLQo=",
                    "host": "orderer.example.com",
                    "port": 7050,
                    "server_tls_cert": "LS0tLS1<...>tLQo="
                  }
                ],
                "options": {
                  "election_tick": 10,
                  "heartbeat_tick": 1,
                  "max_inflight_blocks": 5,
                  "snapshot_interval_size": 20971520,
                  "tick_interval": "500ms"
                }
              },
              "state": "STATE_MAINTENANCE",
              "type": "etcdraft"
            },
            "version": "1"
          }

The client_tls_cert and server_tls_cert fields are equal in this case and should contain the base64-encoded TLS certificate of the orderer node. You can obtain this value by executing the following command in the CLI container we’ve already opened.
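A sketch; the certificate path follows the BYFN crypto material layout inside the CLI container:

base64 -w 0 /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt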

After the modification is done, save the file and go back to the CLI container console. Check that you have the correct channel ID and MSP ID in these properties:

  • echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_envelope.json
  • CORE_PEER_LOCALMSPID

The following commands will perform the channel configuration update after encoding the JSON channel configurations into protobuf files.
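This is the same encode, compute-update, and submit pipeline we used when entering the maintenance mode; a sketch:

configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input config_mod.json --type common.Config --output config_mod.pb
configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated config_mod.pb --output config_update.pb
configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate | jq . > config_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"'$CHANNEL_NAME'","type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_envelope.json
configtxlator proto_encode --input config_update_envelope.json --type common.Envelope --output config_update_envelope.pb
peer channel update -f config_update_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA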

Repeat these steps for all your channels. Next, we need to perform the same changes in our system channel.

Make sure to adjust config_mod.json for the system channel in the same way as shown above.

Finally, perform the system channel update, but don’t forget to check the channel name and the orderer MSP first.
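Again, only the channel name changes; a sketch:

export CHANNEL_NAME=byfn-sys-channel
# ...then repeat the fetch, edit, encode, and update sequence above for the system channel.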

If everything has been done properly and you passed the step without any error, log out from the CLI container.

Then, restart the network.

After 10–20 seconds, check the orderer logs by running the following command to verify that the migration completed successfully.
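A sketch; the exact etcdraft log wording differs between versions, so look for Raft leader election entries for each channel:

docker logs orderer.example.com 2>&1 | grep -i raft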

In the output, etcdraft entries reporting that a Raft leader has been elected for each channel indicate a successful migration.

 

Disable the maintenance mode

After the migration is completed, we can disable the maintenance mode and get rid of the Kafka cluster. Run the following commands to disable the maintenance mode on the working channel, but first make sure that the channel name and the orderer MSP are specified correctly.

Log in to the CLI container.

Then, submit the channel update.
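A sketch: fetch the latest configuration, flip ConsensusType.state back to STATE_NORMAL, and push the change through the same pipeline as before.

# In a fresh CLI session, re-export CHANNEL_NAME, ORDERER_CA, and the orderer
# admin variables (CORE_PEER_LOCALMSPID, CORE_PEER_MSPCONFIGPATH) first.
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
jq '.channel_group.groups.Orderer.values.ConsensusType.value.state = "STATE_NORMAL"' config.json > config_mod.json
# ...then the same proto_encode / compute_update / envelope / peer channel update sequence as above.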

Repeat for all the channels. Next, we need to switch off the maintenance mode for the system channel.

After that, log out from the CLI container.

Then, restart the containers.

 

Execute the chaincode

To test whether the network actually operates, you can execute the chaincode (mycc in our case) with the following commands.

Log in to the CLI container.

Then, finally, query and invoke the chaincode.
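A sketch using the standard BYFN sample chaincode; a fresh CLI session starts with the Org1 peer admin environment, which is what chaincode invocation needs. The peer addresses and ports follow the v1.4 BYFN defaults, and the invoke targets both orgs to satisfy the default endorsement policy.

export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

# Query the current value of "a".
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# Move 10 units from "a" to "b".
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc \
  --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt \
  --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt \
  -c '{"Args":["invoke","a","b","10"]}'

# Query again to confirm the transaction went through the new Raft orderer.
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'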

So, this is the whole process behind migrating from the Kafka-based consensus to a single-node Raft ordering service. To tune the ordering service and change the number of orderers, refer to the official guide. For the source code used in this tutorial, explore our GitHub repository.

 


About the author

Hleb Ioda is Blockchain DevOps Engineer at Altoros with 5+ years of experience in network engineering. He specializes in network configuration and maintenance, as well as continuous integration and delivery. Hleb’s interests include deployment automation, cloud-native apps, blockchain, and distributed software. He is also skilled in working with GNU/Linux and various cloud providers.

The post was written by Hleb Ioda; edited and published by Sophia Turol and Alex Khizhniak.