Hyperledger Fabric Documentation
Release master
Enterprise grade permissioned distributed ledger platform that offers modularity and versatility for a broad set of
industry use cases.
CHAPTER 1
Introduction
In general terms, a blockchain is an immutable transaction ledger, maintained within a distributed network of peer
nodes. These nodes each maintain a copy of the ledger by applying transactions that have been validated by a consensus
protocol, grouped into blocks that include a hash that binds each block to the preceding block.
The first and most widely recognized application of blockchain is the Bitcoin cryptocurrency, though others have
followed in its footsteps. Ethereum, an alternative cryptocurrency, took a different approach, integrating many of the
same characteristics as Bitcoin but adding smart contracts to create a platform for distributed applications. Bitcoin
and Ethereum fall into a class of blockchain that we would classify as public permissionless blockchain technology.
Basically, these are public networks, open to anyone, where participants interact anonymously.
As the popularity of Bitcoin, Ethereum and a few other derivative technologies grew, interest in applying the underlying
technology of the blockchain, distributed ledger and distributed application platform to more innovative enterprise
use cases also grew. However, many enterprise use cases require performance characteristics that the permissionless
blockchain technologies are unable (presently) to deliver. In addition, in many use cases, the identity of the participants
is a hard requirement, such as in the case of financial transactions where Know-Your-Customer (KYC) and Anti-Money
Laundering (AML) regulations must be followed.
For enterprise use, we need to consider the following requirements:
• Participants must be identified/identifiable
• Networks need to be permissioned
• High transaction throughput performance
• Low latency of transaction confirmation
• Privacy and confidentiality of transactions and data pertaining to business transactions
While many early blockchain platforms are currently being adapted for enterprise use, Hyperledger Fabric has been
designed for enterprise use from the outset. The following sections describe how Hyperledger Fabric (Fabric)
differentiates itself from other blockchain platforms and describe some of the motivation for its architectural decisions.
1.1 Hyperledger Fabric
Hyperledger Fabric is an open source enterprise-grade permissioned distributed ledger technology (DLT) platform,
designed for use in enterprise contexts, that delivers some key differentiating capabilities over other popular distributed
ledger or blockchain platforms.
One key point of differentiation is that Hyperledger was established under the Linux Foundation, which itself has a
long and very successful history of nurturing open source projects under open governance that grow strong sustaining
communities and thriving ecosystems. Hyperledger is governed by a diverse technical steering committee, and the
Hyperledger Fabric project by a diverse set of maintainers from multiple organizations. It has a development community
that has grown to over 35 organizations and nearly 200 developers since its earliest commits.
Fabric has a highly modular and configurable architecture, enabling innovation, versatility and optimization for a
broad range of industry use cases including banking, finance, insurance, healthcare, human resources, supply chain
and even digital music delivery.
Fabric is the first distributed ledger platform to support smart contracts authored in general-purpose programming
languages such as Java, Go and Node.js, rather than constrained domain-specific languages (DSL). This means that
most enterprises already have the skill set needed to develop smart contracts, and no additional training to learn a new
language or DSL is needed.
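To make this concrete, here is a minimal sketch of what a Fabric 1.x chaincode looks like when written in plain Go against the shim API. The asset and function names are illustrative only, not taken from this documentation:

```go
package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset is a hypothetical chaincode managing simple key/value state.
type SimpleAsset struct{}

// Init is called when the chaincode is instantiated; it seeds the world state.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) pb.Response {
	if err := stub.PutState("a", []byte("100")); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}

// Invoke dispatches transaction proposals to ordinary Go logic.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch {
	case fn == "get" && len(args) == 1:
		value, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(value)
	case fn == "set" && len(args) == 2:
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	default:
		return shim.Error("unknown function")
	}
}

func main() {
	if err := shim.Start(new(SimpleAsset)); err != nil {
		panic(err)
	}
}
```

Nothing here is blockchain-specific Go: the contract logic is ordinary code, which is exactly the point being made above.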
The Fabric platform is also permissioned, meaning that, unlike with a public permissionless network, the participants
are known to each other, rather than anonymous and therefore fully untrusted. This means that while the participants
may not fully trust one another (they may, for example, be competitors in the same industry), a network can be operated
under a governance model that is built off of what trust does exist between participants, such as a legal agreement or
framework for handling disputes.
One of the most important of the platform’s differentiators is its support for pluggable consensus protocols that
enable the platform to be more effectively customized to fit particular use cases and trust models. For instance, when
deployed within a single enterprise, or operated by a trusted authority, fully byzantine fault tolerant consensus might
be considered unnecessary and an excessive drag on performance and throughput. In situations such as that, a crash
fault-tolerant (CFT) consensus protocol might be more than adequate whereas, in a multi-party, decentralized use case,
a more traditional byzantine fault tolerant (BFT) consensus protocol might be required.
Fabric can leverage consensus protocols that do not require a native cryptocurrency to incent costly mining or to
fuel smart contract execution. Avoidance of a cryptocurrency reduces some significant risk/attack vectors, and absence
of cryptographic mining operations means that the platform can be deployed with roughly the same operational cost
as any other distributed system.
The combination of these differentiating design features makes Fabric one of the better performing platforms avail-
able today both in terms of transaction processing and transaction confirmation latency, and it enables privacy and
confidentiality of transactions and the smart contracts (what Fabric calls “chaincode”) that implement them.
Let’s explore these differentiating features in more detail.
1.2 Modularity
Hyperledger Fabric has been specifically designed with a modular architecture. Whether it is pluggable consensus,
pluggable identity management protocols such as LDAP or OpenID Connect, key management protocols or
cryptographic libraries, the platform has been designed at its core to be configured to meet the diversity of enterprise
use case requirements.
At a high level, Fabric comprises the following modular components:
• A pluggable ordering service establishes consensus on the order of transactions and then broadcasts blocks to
peers.
• A pluggable membership service provider is responsible for associating entities in the network with cryptographic
identities.
• An optional peer-to-peer gossip service disseminates the blocks output by the ordering service to other peers.
• Smart contracts (“chaincode”) run within a container environment (e.g. Docker) for isolation. They can be
written in standard programming languages but do not have direct access to the ledger state.
• The ledger can be configured to support a variety of DBMSs.
• Pluggable endorsement and validation policy enforcement that can be configured independently per application.
There is fair agreement in the industry that there is no “one blockchain to rule them all”. Hyperledger Fabric can be
configured in multiple ways to satisfy the diverse solution requirements for multiple industry use cases.
1.3 Permissioned vs. Permissionless Blockchains
In a permissionless blockchain, virtually anyone can participate, and every participant is anonymous. In such a context,
there can be no trust other than that the state of the blockchain, prior to a certain depth, is immutable. In order
to mitigate this absence of trust, permissionless blockchains typically employ a “mined” native cryptocurrency or
transaction fees to provide economic incentive to offset the extraordinary costs of participating in a form of byzantine
fault tolerant consensus based on “proof of work” (PoW).
Permissioned blockchains, on the other hand, operate a blockchain amongst a set of known, identified and often vetted
participants operating under a governance model that yields a certain degree of trust. A permissioned blockchain
provides a way to secure the interactions among a group of entities that have a common goal but which may not fully
trust each other. By relying on the identities of the participants, a permissioned blockchain can use more traditional
crash fault tolerant (CFT) or byzantine fault tolerant (BFT) consensus protocols that do not require costly mining.
Additionally, in such a permissioned context, the risk of a participant intentionally introducing malicious code through
a smart contract is diminished. First, the participants are known to one another and all actions, whether submitting
application transactions, modifying the configuration of the network or deploying a smart contract are recorded on the
blockchain following an endorsement policy that was established for the network and relevant transaction type. Rather
than being completely anonymous, the guilty party can be easily identified and the incident handled in accordance
with the terms of the governance model.
1.4 Smart Contracts
A smart contract, or what Fabric calls “chaincode”, functions as a trusted distributed application that gains its
security/trust from the blockchain and the underlying consensus among the peers. It is the business logic of a blockchain
application.
There are three key points that apply to smart contracts, especially when applied to a platform:
• many smart contracts run concurrently in the network,
• they may be deployed dynamically (in many cases by anyone), and
• application code should be treated as untrusted, potentially even malicious.
Most existing smart-contract capable blockchain platforms follow an order-execute architecture in which the
consensus protocol:
• validates and orders transactions, then propagates them to all peer nodes, and
• each peer then executes the transactions sequentially.
The order-execute architecture can be found in virtually all existing blockchain systems, ranging from
public/permissionless platforms such as Ethereum (with PoW-based consensus) to permissioned platforms such as
Tendermint, Chain, and Quorum.
Smart contracts executing in a blockchain that operates with the order-execute architecture must be deterministic;
otherwise, consensus might never be reached. To address the non-determinism issue, many platforms require that the
smart contracts be written in a non-standard, or domain-specific language (such as Solidity) so that non-deterministic
operations can be eliminated. This hinders widespread adoption because it requires developers writing smart contracts
to learn a new language and may lead to programming errors.
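As a hypothetical illustration of the non-determinism problem (reusing the shim types and imports from the earlier Go sketch, plus "time"), consider a handler that stamps state with the local clock; every node that executes it computes a different value, so the replicas can never agree:

```go
// Anti-pattern: non-deterministic chaincode logic.
func (t *BadContract) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	// time.Now() differs on every executing node, so each node derives a
	// different write set and agreement on the result is impossible.
	stamp := []byte(time.Now().String())
	if err := stub.PutState("lastUpdated", stamp); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(nil)
}
```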
Further, since all transactions are executed sequentially by all nodes, performance and scale are limited. The fact that
the smart contract code executes on every node in the system demands that complex measures be taken to protect the
overall system from potentially malicious contracts in order to ensure resiliency of the overall system.
1.5 A New Approach to Chaincode
Fabric introduces a new architecture for transactions that we call execute-order-validate. It addresses the resiliency,
flexibility, scalability, performance and confidentiality challenges faced by the order-execute model by separating the
transaction flow into three steps:
• execute a transaction and check its correctness, thereby endorsing it,
• order transactions via a (pluggable) consensus protocol, and
• validate transactions against an application-specific endorsement policy before committing them to the ledger.
This design departs radically from the order-execute paradigm in that Fabric executes transactions before reaching
final agreement on their order.
In Fabric, an application-specific endorsement policy specifies which peer nodes, or how many of them, need to vouch
for the correct execution of a given smart contract. Thus, each transaction need only be executed (endorsed) by the
subset of the peer nodes necessary to satisfy the transaction’s endorsement policy. This allows for parallel execution,
increasing overall performance and scale of the system. This first phase also eliminates any non-determinism, as
inconsistent results can be filtered out before ordering.
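For a sense of what such a policy looks like, Fabric expresses endorsement policies in a small policy language supplied when the chaincode is instantiated; the MSP names below are hypothetical:

```
AND('Org1MSP.member', 'Org2MSP.member')
OR('Org1MSP.member', 'Org2MSP.member')
```

The first form requires a signature from a member of both organizations; the second accepts an endorsement from either. Only the peers named by the policy need to execute the transaction.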
Because we have eliminated non-determinism, Fabric is the first blockchain technology that enables use of standard
programming languages. In the 1.1.0 release, smart contracts can be written in either Go or Node.js, while there are
plans to support other popular languages including Java in subsequent releases.
1.6 Privacy and Confidentiality
As we have discussed, in a public, permissionless blockchain network that leverages PoW for its consensus model,
transactions are executed on every node. This means that there can be no confidentiality of the contracts
themselves, nor of the transaction data that they process. Every transaction, and the code that implements it, is visible to
every node in the network. In this case, we have traded confidentiality of contract and data for byzantine fault tolerant
consensus delivered by PoW.
This lack of confidentiality can be problematic for many business/enterprise use cases. For example, in a network of
supply-chain partners, some consumers might be given preferred rates as a means of either solidifying a relationship,
or promoting additional sales. If every participant can see every contract and transaction, it becomes impossible to
maintain such business relationships in a completely transparent network – everyone will want the preferred rates!
As a second example, consider the securities industry, where a trader building a position (or disposing of one) would
not want her competitors to know of this, or else they will seek to get in on the game, weakening the trader’s gambit.
In order to address the lack of privacy and confidentiality for purposes of delivering on enterprise use case
requirements, blockchain platforms have adopted a variety of approaches. All have their trade-offs.
Encrypting data is one approach to providing confidentiality; however, in a permissionless network leveraging PoW
for its consensus, the encrypted data is sitting on every node. Given enough time and computational resources, the
encryption could be broken. For many enterprise use cases, the risk that their information could become compromised
is unacceptable.
Zero knowledge proofs (ZKP) are another area of research being explored to address this problem, the trade-off here
being that, presently, computing a ZKP requires considerable time and computational resources. Hence, the trade-off
in this case is performance for confidentiality.
In a permissioned context that can leverage alternate forms of consensus, one might explore approaches that restrict
the distribution of confidential information exclusively to authorized nodes.
Hyperledger Fabric, being a permissioned platform, enables confidentiality through its channel architecture. Basically,
participants on a Fabric network can establish a “channel” between the subset of participants that should be granted
visibility to a particular set of transactions. Think of this as a network overlay. Thus, only those nodes that participate in
a channel have access to the smart contract (chaincode) and data transacted, preserving the privacy and confidentiality
of both.
To improve upon its privacy and confidentiality capabilities, Fabric has added support for private data and is working
to make zero knowledge proofs (ZKP) available in the future. More on this as it becomes available.
1.7 Pluggable Consensus
The ordering of transactions is delegated to a modular consensus component, the ordering service, which is logically
decoupled from the peers that execute transactions and maintain the ledger. Since consensus is
modular, its implementation can be tailored to the trust assumption of a particular deployment or solution. This
modular architecture allows the platform to rely on well-established toolkits for CFT (crash fault-tolerant) or BFT
(byzantine fault-tolerant) ordering.
In the currently available releases, Fabric offers a CFT ordering service implemented with Kafka and Zookeeper. In
subsequent releases, Fabric will deliver a Raft consensus ordering service implemented with etcd/Raft and a fully
decentralized BFT ordering service.
Note also that these are not mutually exclusive. A Fabric network can have multiple ordering services supporting
different applications or application requirements.
1.8 Performance and Scale
Performance of a blockchain platform can be affected by many variables such as transaction size, block size, network
size, as well as limits of the hardware, etc. The Hyperledger community is currently developing a draft set of measures
within the Performance and Scale working group, along with a corresponding implementation of a benchmarking
framework called Hyperledger Caliper.
While that work continues to be developed, and will eventually serve as a definitive measure of blockchain platform
performance and scale characteristics, a team from IBM Research has published a peer-reviewed paper that evaluated
the architecture and performance of Hyperledger Fabric. The paper offers an in-depth discussion of the architecture
of Fabric and then reports on the team’s performance evaluation of the platform using a preliminary release of
Hyperledger Fabric v1.1.
The research team’s benchmarking efforts yielded a significant number of performance improvements for the Fabric
v1.1.0 release, more than doubling the overall performance of the platform relative to the v1.0.0 release.
1.9 Conclusion
Any serious evaluation of blockchain platforms should include Hyperledger Fabric in its short list.
Combined, the differentiating capabilities of Fabric make it a highly scalable system for permissioned blockchains
supporting flexible trust assumptions that enable the platform to support a wide range of industry use cases ranging
from government, to finance, to supply-chain logistics, to healthcare and so much more.
More importantly, Hyperledger Fabric is the most active of the (currently) ten Hyperledger projects. The community
building around the platform is growing steadily, and the innovation delivered with each successive release far
outpaces any of the other enterprise blockchain platforms.
1.10 Acknowledgement
The preceding is derived from the peer-reviewed “Hyperledger Fabric: A Distributed Operating System for
Permissioned Blockchains” - Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin, Konstantinos Christidis,
Angelo De Caro, David Enyeart, Christopher Ferris, Gennady Laventman, Yacov Manevich, Srinivasan Muralidharan,
Chet Murthy, Binh Nguyen, Manish Sethi, Gari Singh, Keith Smith, Alessandro Sorniotti, Chrysoula Stathakopoulou,
Marko Vukolic, Sharon Weed Cocco, Jason Yellick
CHAPTER 2
What’s new in v1.4
Hyperledger Fabric has matured since the initial v1.0 release, and so has the community of Fabric operators. The
Fabric developers have been working with network operators to deliver v1.4 with a focus on stability and production
operations. As such, v1.4 will be our first long-term support release.
Our policy to date has been to provide bug fix (patch) releases for our most recent major or minor release until the
next major or minor release has been published. We plan to continue this policy for subsequent releases. However,
for Hyperledger Fabric v1.4, the Fabric maintainers are pledging to provide bug fixes for a period of one year from
the date of release. This will likely result in a series of patch releases (v1.4.1, v1.4.2, ...), where multiple fixes are
bundled into a patch release.
If you are running with Hyperledger Fabric v1.4, you can be assured that you will be able to safely upgrade to any of
the subsequent patch releases. In the event that an upgrade process is needed to remedy a defect, we will
provide that process with the patch release.
As more Hyperledger Fabric networks get deployed and enter a production state, serviceability and operational aspects
are critical. Fabric v1.4 takes a giant leap forward with logging improvements, health checks, and operational metrics.
As such, Fabric v1.4 is the recommended release for production operations.
• The Operations Service: The new RESTful operations service provides operators with three services to monitor
and manage peer and orderer node operations:
– The logging /logspec endpoint allows operators to dynamically get and set logging levels for the peer
and orderer nodes.
– The /healthz endpoint allows operators and container orchestration services to check peer and orderer
node liveness and health.
– The /metrics endpoint allows operators to utilize Prometheus to pull operational metrics from peer and
orderer nodes. Metrics can also be pushed to StatsD.
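As a rough sketch of how these endpoints are consumed, the short Go program below probes /healthz and updates the logging spec via /logspec. The listen address is configurable per node, so 127.0.0.1:9443 is an assumption for illustration, and a hardened deployment may require TLS with client authentication:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Assumed operations listen address; set per node in its configuration.
	base := "http://127.0.0.1:9443"

	// Liveness/health check (GET /healthz): 200 OK indicates a healthy node.
	if resp, err := http.Get(base + "/healthz"); err == nil {
		fmt.Println("healthz:", resp.Status)
		resp.Body.Close()
	}

	// Dynamically raise the gRPC logger to debug (PUT /logspec).
	req, _ := http.NewRequest(http.MethodPut, base+"/logspec",
		strings.NewReader(`{"spec":"grpc=debug:info"}`))
	req.Header.Set("Content-Type", "application/json")
	if resp, err := http.DefaultClient.Do(req); err == nil {
		fmt.Println("logspec:", resp.Status)
		resp.Body.Close()
	}
}
```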
Writing decentralized applications has just gotten easier. Programming model improvements in the Node.js SDK and
Node.js chaincode make the development of decentralized applications more intuitive, allowing you to focus on your
application logic. The existing npm packages are still available for use, while the new npm packages provide a layer
of abstraction to improve developer productivity and ease of use.
New documentation helps you understand the various aspects of creating a decentralized application for Hyperledger
Fabric, using a commercial paper business network scenario.
• The scenario: Describes a hypothetical business network involving six organizations that want to build an
application to transact together; this scenario serves as the use case for describing the programming model.
• Analysis: Describes the structure of a commercial paper and how transactions affect it over time. Demonstrates
that modeling using states and transactions provides a precise way to understand and model the decentralized
business process.
• Process and Data Design: Shows how to design the commercial paper processes and their related data structures.
• Smart Contract Processing: Shows how a smart contract governing the decentralized business process of issuing,
buying and redeeming commercial paper should be designed.
• Application: Conceptually describes a client application that would leverage the smart contract described in
Smart Contract Processing.
• Application design elements: Describes the details around contract namespaces, transaction context, transaction
handlers, connection profiles, connection options, wallets, and gateways.
And finally, a tutorial and sample that brings the commercial paper scenario to life:
• Commercial paper tutorial
New tutorials:
• Writing Your First Application: This tutorial has been updated to leverage the improved Node.js SDK and
chaincode programming model. The tutorial has both JavaScript and TypeScript examples of the client application
and chaincode.
• Commercial paper tutorial: As mentioned above, this is the tutorial that accompanies the new Developing
Applications documentation.
• Upgrading to the Newest Version of Fabric: Leverages the network from Building Your First Network to demon-
strate an upgrade from v1.3 to v1.4. Includes both a script (which can serve as a template for upgrades), as well
as the individual commands so that you can understand every step of an upgrade.
• Private Data: The Private data feature has been a part of Fabric since v1.2, and this release debuts two new
enhancements:
– Reconciliation, which allows peers for organizations that are added to private data collections to retrieve
the private data for prior transactions to which they now are entitled.
– Client access control to automatically enforce access control within chaincode based on the client
organization’s collection membership, without having to write specific chaincode logic.
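For context, private data collections are defined in a JSON collection definition supplied at chaincode instantiation. The collection name and policy below are illustrative; memberOnlyRead is the v1.4 property behind the automatic client access control described above:

```json
[
  {
    "name": "assetPrivateDetails",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 1,
    "maxPeerCount": 2,
    "blockToLive": 0,
    "memberOnlyRead": true
  }
]
```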
CHAPTER 3
Release notes
The release notes provide more details for users moving to the new release, along with a link to the full release change
log.
• Fabric release notes.
• Fabric CA release notes.
CHAPTER 4
Key Concepts
4.1 Introduction
Hyperledger Fabric is a platform for distributed ledger solutions underpinned by a modular architecture delivering high
degrees of confidentiality, resiliency, flexibility, and scalability. It is designed to support pluggable implementations
of different components and accommodate the complexity and intricacies that exist across the economic ecosystem.
We recommend first-time users begin by going through the rest of the introduction below in order to gain familiarity
with how blockchains work and with the specific features and components of Hyperledger Fabric.
Once comfortable — or if you’re already familiar with blockchain and Hyperledger Fabric — go to Getting Started
and from there explore the demos, technical specifications, APIs, etc.
A Distributed Ledger
At the heart of a blockchain network is a distributed ledger that records all the transactions that take place on the
network.
A blockchain ledger is often described as decentralized because it is replicated across many network participants,
each of whom collaborate in its maintenance. We’ll see that decentralization and collaboration are powerful attributes
that mirror the way businesses exchange goods and services in the real world.
In addition to being decentralized and collaborative, the information recorded to a blockchain is append-only, using
cryptographic techniques that guarantee that once a transaction has been added to the ledger it cannot be modified.
This property of “immutability” makes it simple to determine the provenance of information because participants can
be sure information has not been changed after the fact. It’s why blockchains are sometimes described as systems of
proof.
Smart Contracts
To support the consistent update of information — and to enable a whole host of ledger functions (transacting,
querying, etc.) — a blockchain network uses smart contracts to provide controlled access to the ledger.
Smart contracts are not only a key mechanism for encapsulating information and keeping it simple across the network,
they can also be written to allow participants to execute certain aspects of transactions automatically.
A smart contract can, for example, be written to stipulate the cost of shipping an item where the shipping charge
changes depending on how quickly the item arrives. With the terms agreed to by both parties and written to the ledger,
the appropriate funds change hands automatically when the item is received.
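The shipping example can be reduced to a few lines of ordinary code. A hypothetical sketch in Go (the rates and thresholds are invented for illustration):

```go
// shippingCharge returns the fee owed for a delivery, scaled by speed:
// faster arrival commands a premium, slower arrival earns a discount.
func shippingCharge(baseFee float64, daysInTransit int) float64 {
	switch {
	case daysInTransit <= 2:
		return baseFee * 1.5 // express delivery premium
	case daysInTransit <= 5:
		return baseFee // standard rate
	default:
		return baseFee * 0.8 // discount for slow delivery
	}
}
```

Written as a smart contract and agreed by both parties, a rule like this runs the same way for everyone, which is what lets the funds move automatically.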
Consensus
The process of keeping the ledger transactions synchronized across the network — to ensure that ledgers update only
when transactions are approved by the appropriate participants, and that when ledgers do update, they update with the
same transactions in the same order — is called consensus.
You’ll learn a lot more about ledgers, smart contracts and consensus later. For now, it’s enough to think of a blockchain
as a shared, replicated transaction system which is updated via smart contracts and kept consistently synchronized
through a collaborative process called consensus.
Modern technology has taken the process of recording business transactions from stone tablets and paper folders to
hard drives and cloud platforms, but the underlying structure is the same. Unified systems for managing the identity of
network participants do not exist,
establishing provenance is so laborious it takes days to clear securities transactions (the world volume of which is
numbered in the many trillions of dollars), contracts must be signed and executed manually, and every database in the
system contains unique information and therefore represents a single point of failure.
It’s impossible with today’s fractured approach to information and process sharing to build a system of record that
spans a business network, even though the needs of visibility and trust are clear.
The Blockchain Difference
What if, instead of the rat’s nest of inefficiencies represented by the “modern” system of transactions, business
networks had standard methods for establishing identity on the network, executing transactions, and storing data? What
if establishing the provenance of an asset could be determined by looking through a list of transactions that, once
written, cannot be changed, and can therefore be trusted?
That business network would look more like this:
This is a blockchain network, wherein every participant has their own replicated copy of the ledger. In addition to
ledger information being shared, the processes which update the ledger are also shared. Unlike today’s systems, where
a participant’s private programs are used to update their private ledgers, a blockchain system has shared programs
to update shared ledgers.
With the ability to coordinate their business network through a shared ledger, blockchain networks can reduce the
time, cost, and risk associated with private information and processing while improving trust and visibility.
You now know what blockchain is and why it’s useful. There are a lot of other details that are important, but they all
relate to these fundamental ideas of the sharing of information and processes.
The Linux Foundation founded the Hyperledger project in 2015 to advance cross-industry blockchain technologies.
Rather than declaring a single blockchain standard, it encourages a collaborative approach to developing blockchain
technologies via a community process, with intellectual property rights that encourage open development and the
adoption of key standards over time.
Hyperledger Fabric is one of the blockchain projects within Hyperledger. Like other blockchain technologies, it has a
ledger, uses smart contracts, and is a system by which participants manage their transactions.
Where Hyperledger Fabric breaks from some other blockchain systems is that it is private and permissioned. Rather
than an open permissionless system that allows unknown identities to participate in the network (requiring protocols
like “proof of work” to validate transactions and secure the network), the members of a Hyperledger Fabric network
enroll through a trusted Membership Service Provider (MSP).
Hyperledger Fabric also offers several pluggable options. Ledger data can be stored in multiple formats, consensus
mechanisms can be swapped in and out, and different MSPs are supported.
Hyperledger Fabric also offers the ability to create channels, allowing a group of participants to create a separate ledger
of transactions. This is an especially important option for networks where some participants might be competitors and
not want every transaction they make — a special price they’re offering to some participants and not others, for
example — known to every participant. If two participants form a channel, then those participants — and no others —
have copies of the ledger for that channel.
Shared Ledger
Hyperledger Fabric has a ledger subsystem comprising two components: the world state and the transaction log.
Each participant has a copy of the ledger for every Hyperledger Fabric network they belong to.
The world state component describes the state of the ledger at a given point in time. It’s the database of the ledger. The
transaction log component records all transactions which have resulted in the current value of the world state; it’s the
update history for the world state. The ledger, then, is a combination of the world state database and the transaction
log history.
The ledger has a replaceable data store for the world state. By default, this is a LevelDB key-value store database.
The transaction log does not need to be pluggable. It simply records the before and after values of the ledger database
being used by the blockchain network.
Smart Contracts
Hyperledger Fabric smart contracts are written in chaincode and are invoked by an application external to the
blockchain when that application needs to interact with the ledger. In most cases, chaincode interacts only with
the database component of the ledger, the world state (querying it, for example), and not the transaction log.
Chaincode can be implemented in several programming languages. Currently, Go and Node are supported.
Privacy
Depending on the needs of a network, participants in a Business-to-Business (B2B) network might be extremely
sensitive about how much information they share. For other networks, privacy will not be a top concern.
Hyperledger Fabric supports networks where privacy (using channels) is a key operational requirement as well as
networks that are comparatively open.
Consensus
Transactions must be written to the ledger in the order in which they occur, even though they might be between
different sets of participants within the network. For this to happen, the order of transactions must be established and a
method for rejecting bad transactions that have been inserted into the ledger in error (or maliciously) must be put into
place.
This is a thoroughly researched area of computer science, and there are many ways to achieve it, each with different
trade-offs. For example, PBFT (Practical Byzantine Fault Tolerance) can provide a mechanism for file replicas to
communicate with each other to keep each copy consistent, even in the event of corruption. Alternatively, in Bitcoin,
ordering happens through a process called mining where competing computers race to solve a cryptographic puzzle
which defines the order that all processes subsequently build upon.
Hyperledger Fabric has been designed to allow network starters to choose a consensus mechanism that best represents
the relationships that exist between participants. As with privacy, there is a spectrum of needs; from networks that are
highly structured in their relationships to those that are more peer-to-peer.
We’ll learn more about the Hyperledger Fabric consensus mechanisms, which currently include SOLO and Kafka.
4.2 Hyperledger Fabric Capabilities
Hyperledger Fabric is an implementation of distributed ledger technology (DLT) that delivers enterprise-ready
network security, scalability, confidentiality and performance, in a modular blockchain architecture. Hyperledger Fabric
delivers the following blockchain network functionalities:
4.2.1 Identity Management
To enable permissioned networks, Hyperledger Fabric provides a membership identity service that manages user IDs
and authenticates all participants on the network. Access control lists can be used to provide additional layers of
permission through authorization of specific network operations. For example, a specific user ID could be permitted
to invoke a chaincode application, but be blocked from deploying new chaincode.
4.2.2 Privacy and Confidentiality
Hyperledger Fabric enables competing business interests, and any groups that require private, confidential
transactions, to coexist on the same permissioned network. Private channels are restricted messaging paths that can be
used to provide transaction privacy and confidentiality for specific subsets of network members. All data, including
transaction, member and channel information, on a channel is invisible and inaccessible to any network members not
explicitly granted access to that channel.
4.2.3 Efficient Processing
Hyperledger Fabric assigns network roles by node type. To provide concurrency and parallelism to the network,
transaction execution is separated from transaction ordering and commitment. Executing transactions prior to ordering
them enables each peer node to process multiple transactions simultaneously. This concurrent execution increases
processing efficiency on each peer and accelerates delivery of transactions to the ordering service.
In addition to enabling parallel processing, the division of labor unburdens ordering nodes from the demands of
transaction execution and ledger maintenance, while peer nodes are freed from ordering (consensus) workloads. This
bifurcation of roles also limits the processing required for authorization and authentication; all peer nodes do not have
to trust all ordering nodes, and vice versa, so processes on one can run independently of verification by the other.
4.2.4 Chaincode Functionality
Chaincode applications encode logic that is invoked by specific types of transactions on the channel. Chaincode that
defines parameters for a change of asset ownership, for example, ensures that all transactions that transfer ownership
are subject to the same rules and requirements. System chaincode is distinguished as chaincode that defines operating
parameters for the entire channel. Lifecycle and configuration system chaincode defines the rules for the channel;
endorsement and validation system chaincode defines the requirements for endorsing and validating transactions.
4.2.5 Modular Design
Hyperledger Fabric implements a modular architecture to provide functional choice to network designers. Specific
algorithms for identity, ordering (consensus) and encryption, for example, can be plugged in to any Hyperledger
Fabric network. The result is a universal blockchain architecture that any industry or public domain can adopt, with
the assurance that its networks will be interoperable across market, regulatory and geographic boundaries.
4.3 Hyperledger Fabric Model
This section outlines the key design features woven into Hyperledger Fabric that fulfill its promise of a comprehensive,
yet customizable, enterprise blockchain solution:
• Assets — Asset definitions enable the exchange of almost anything with monetary value over the network, from
whole foods to antique cars to currency futures.
• Chaincode — Chaincode execution is partitioned from transaction ordering, limiting the required levels of trust
and verification across node types, and optimizing network scalability and performance.
• Ledger Features — The immutable, shared ledger encodes the entire transaction history for each channel, and
includes SQL-like query capability for efficient auditing and dispute resolution.
• Privacy — Channels and private data collections enable private and confidential multi-lateral transactions that
are usually required by competing businesses and regulated industries that exchange assets on a common
network.
• Security & Membership Services — Permissioned membership provides a trusted blockchain network, where
participants know that all transactions can be detected and traced by authorized regulators and auditors.
• Consensus — A unique approach to consensus enables the flexibility and scalability needed for the enterprise.
4.3.1 Assets
Assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property).
Hyperledger Fabric provides the ability to modify assets using chaincode transactions.
Assets are represented in Hyperledger Fabric as a collection of key-value pairs, with state changes recorded as
transactions on a channel ledger. Assets can be represented in binary and/or JSON form.
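A hypothetical sketch of an asset's key-value representation in Go chaincode follows; the Car type and createCar helper are invented for illustration, and shim is the Fabric 1.x chaincode package:

```go
import (
	"encoding/json"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// Car is a hypothetical asset, serialized to JSON for storage.
type Car struct {
	Make  string `json:"make"`
	Model string `json:"model"`
	Owner string `json:"owner"`
}

// createCar records the asset under the given key; the resulting state
// change is captured in the transaction's write set.
func createCar(stub shim.ChaincodeStubInterface, key string, car Car) error {
	bytes, err := json.Marshal(car)
	if err != nil {
		return err
	}
	return stub.PutState(key, bytes)
}
```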
You can easily define and use assets in your Hyperledger Fabric applications using the Hyperledger Composer tool.
4.3.2 Chaincode
Chaincode is software defining an asset or assets, and the transaction instructions for modifying the asset(s); in other
words, it’s the business logic. Chaincode enforces the rules for reading or altering key-value pairs or other state
database information. Chaincode functions execute against the ledger’s current state database and are initiated through
a transaction proposal. Chaincode execution results in a set of key-value writes (write set) that can be submitted to the
network and applied to the ledger on all peers.
4.3.3 Ledger Features
The ledger is the sequenced, tamper-resistant record of all state transitions in the fabric. State transitions are a result
of chaincode invocations (‘transactions’) submitted by participating parties. Each transaction results in a set of asset
key-value pairs that are committed to the ledger as creates, updates, or deletes.
The ledger consists of a blockchain (‘chain’) to store the immutable, sequenced record in blocks, as well as a state
database to maintain current fabric state. There is one ledger per channel. Each peer maintains a copy of the ledger
for each channel of which they are a member.
Some features of a Fabric ledger:
• Query and update ledger using key-based lookups, range queries, and composite key queries
• Read-only queries using a rich query language (if using CouchDB as state database)
• Read-only history queries — Query ledger history for a key, enabling data provenance scenarios
• Transactions consist of the versions of keys/values that were read in chaincode (read set) and keys/values that
were written in chaincode (write set)
• Transactions contain signatures of every endorsing peer and are submitted to ordering service
• Transactions are ordered into blocks and are “delivered” from an ordering service to peers on a channel
• Peers validate transactions against endorsement policies and enforce the policies
• Prior to appending a block, a versioning check is performed to ensure that states for assets that were read have
not changed since chaincode execution time
• There is immutability once a transaction is validated and committed
• A channel’s ledger contains a configuration block defining policies, access control lists, and other pertinent
information
• Channels contain Membership Service Provider instances allowing for crypto materials to be derived from
different certificate authorities
See the ledger topic for a deeper dive on the databases, storage structure, and “query-ability.”
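A hedged sketch of the key-based range and history queries from the feature list above, using the Fabric 1.x shim API (the key names are illustrative):

```go
import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// queryExamples enumerates current values in a key range, then walks
// the full history of one key.
func queryExamples(stub shim.ChaincodeStubInterface) error {
	// Range query over the world state; the end key is exclusive.
	iter, err := stub.GetStateByRange("asset0", "asset9")
	if err != nil {
		return err
	}
	defer iter.Close()
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return err
		}
		fmt.Printf("state %s = %s\n", kv.Key, string(kv.Value))
	}

	// History query against the transaction log: every value a key has held.
	hist, err := stub.GetHistoryForKey("asset1")
	if err != nil {
		return err
	}
	defer hist.Close()
	for hist.HasNext() {
		mod, err := hist.Next()
		if err != nil {
			return err
		}
		fmt.Printf("tx %s wrote %s (delete=%v)\n", mod.TxId, string(mod.Value), mod.IsDelete)
	}
	return nil
}
```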
4.3.4 Privacy
Hyperledger Fabric employs an immutable ledger on a per-channel basis, as well as chaincode that can manipulate
and modify the current state of assets (i.e. update key-value pairs). A ledger exists in the scope of a channel — it can
be shared across the entire network (assuming every participant is operating on one common channel) — or it can be
privatized to include only a specific set of participants.
In the latter scenario, these participants would create a separate channel and thereby isolate/segregate their transactions
and ledger. In order to solve scenarios that want to bridge the gap between total transparency and privacy, chaincode
can be installed only on peers that need to access the asset states to perform reads and writes (in other words, if a
chaincode is not installed on a peer, it will not be able to properly interface with the ledger).
When a subset of organizations on that channel need to keep their transaction data confidential, a private data collection
(collection) is used to segregate this data in a private database, logically separate from the channel ledger, accessible
only to the authorized subset of organizations.
Thus, channels keep transactions private from the broader network whereas collections keep data private between
subsets of organizations on the channel.
To further obfuscate the data, values within chaincode can be encrypted (in part or in total) using common
cryptographic algorithms such as AES before sending transactions to the ordering service and appending blocks to the
ledger. Once encrypted data has been written to the ledger, it can be decrypted only by a user in possession of the
corresponding key that was used to generate the cipher text. For further details on chaincode encryption, see the
Chaincode for Developers topic.
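A minimal sketch of such encryption in Go, assuming the caller already holds a shared symmetric key (in practice keys are typically delivered to chaincode out-of-band, for example via transient fields, which is beyond this sketch):

```go
import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

// encrypt seals plaintext with AES-GCM; only holders of key can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so a key holder can decrypt the stored value later.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```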
See the Private Data topic for more details on how to achieve privacy on your blockchain network.
4.3.5 Security & Membership Services
Hyperledger Fabric underpins a transactional network where all participants have known identities. Public Key
Infrastructure is used to generate cryptographic certificates which are tied to organizations, network components, and end
users or client applications. As a result, data access control can be manipulated and governed on the broader network
and on channel levels. This “permissioned” notion of Hyperledger Fabric, coupled with the existence and capabilities
of channels, helps address scenarios where privacy and confidentiality are paramount concerns.
See the Membership Service Providers (MSP) topic to better understand cryptographic implementations, and the sign,
verify, authenticate approach used in Hyperledger Fabric.
4.3.6 Consensus
In distributed ledger technology, consensus has recently become synonymous with a specific algorithm, within a
single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and
this differentiation is highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow,
from proposal and endorsement, to ordering, validation and commitment. In a nutshell, consensus is defined as the
full-circle verification of the correctness of a set of transactions comprising a block.
Consensus is achieved ultimately when the order and results of a block’s transactions have met the explicit policy
criteria checks. These checks and balances take place during the lifecycle of a transaction, and include the usage of
endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system
chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers will employ these
system chaincodes to make sure that enough endorsements are present, and that they were derived from the appropriate
entities. Moreover, a versioning check will take place during which the current state of the ledger is agreed or consented
upon, before any blocks containing transactions are appended to the ledger. This final check provides protection against
double spend operations and other threats that might compromise data integrity, and allows for functions to be executed
against non-static variables.
In addition to the multitude of endorsement, validity and versioning checks that take place, there are also ongoing
identity verifications happening in all directions of the transaction flow. Access control lists are implemented on
hierarchical layers of the network (ordering service down to channels), and payloads are repeatedly signed, verified and
authenticated as a transaction proposal passes through the different architectural components. To conclude, consensus
is not merely limited to the agreed upon order of a batch of transactions; rather, it is an overarching characterization
that is achieved as a byproduct of the ongoing verifications that take place during a transaction’s journey from proposal
to commitment.
Check out the Transaction Flow diagram for a visual representation of consensus.
4.4 Blockchain Network
This topic will describe, at a conceptual level, how Hyperledger Fabric allows organizations to collaborate in the
formation of blockchain networks. If you’re an architect, administrator or developer, you can use this topic to get a
solid understanding of the major structure and process components in a Hyperledger Fabric blockchain network. This
topic will use a manageable worked example that introduces all of the major components in a blockchain network.
After understanding this example you can read more detailed information about these components elsewhere in the
documentation, or try building a sample network.
After reading this topic and understanding the concept of policies, you will have a solid understanding of the decisions
that organizations need to make to establish the policies that control a deployed Hyperledger Fabric network. You’ll
also understand how organizations manage network evolution using declarative policies – a key feature of Hyperledger
Fabric. In a nutshell, you’ll understand the major technical components of Hyperledger Fabric and the decisions
organizations need to make about them.
A blockchain network is a technical infrastructure that provides ledger and smart contract (chaincode) services to
applications. Primarily, smart contracts are used to generate transactions which are subsequently distributed to every
peer node in the network where they are immutably recorded on their copy of the ledger. The users of applications
might be end users using client applications or blockchain network administrators.
In most cases, multiple organizations come together as a consortium to form the network and their permissions are
determined by a set of policies that are agreed by the consortium when the network is originally configured. Moreover,
network policies can change over time subject to the agreement of the organizations in the consortium, as we’ll discover
when we discuss the concept of modification policy.
Before we start, let’s show you what we’re aiming at! Here’s a diagram representing the final state of our sample
network.
Don’t worry that this might look complicated! As we go through this topic, we will build up the network piece by
piece, so that you see how the organizations R1, R2, R3 and R4 contribute infrastructure to the network to help form
it. This infrastructure implements the blockchain network, and it is governed by policies agreed by the organizations
who form the network – for example, who can add new organizations. You’ll discover how applications consume the
ledger and smart contract services provided by the blockchain network.
Four organizations, R1, R2, R3 and R4 have jointly decided, and written into an agreement, that they will set up and
exploit a Hyperledger Fabric network. R4 has been assigned to be the network initiator – it has been given the power
to set up the initial version of the network. R4 has no intention to perform business transactions on the network. R1
and R2 have a need for private communications within the overall network, as do R2 and R3. Organization R1 has a
client application that can perform business transactions within channel C1. Organization R2 has a client application
that can do similar work both in channel C1 and C2. Organization R3 has a client application that can do this on
channel C2. Peer node P1 maintains a copy of the ledger L1 associated with C1. Peer node P2 maintains a copy of
the ledger L1 associated with C1 and a copy of ledger L2 associated with C2. Peer node P3 maintains a copy of the
ledger L2 associated with C2. The network is governed according to policy rules specified in network configuration
NC4; the network is under the control of organizations R1 and R4. Channel C1 is governed according to the policy
rules specified in channel configuration CC1; the channel is under the control of organizations R1 and R2. Channel
C2 is governed according to the policy rules specified in channel configuration CC2; the channel is under the control
of organizations R2 and R3. There is an ordering service O4 that serves as a network administration point for N,
and uses the system channel. The ordering service also supports application channels C1 and C2, for the purposes of
transaction ordering into blocks for distribution. Each of the four organizations has a preferred Certificate Authority.
Let’s start at the beginning by creating the basis for the network:
The network is formed when an orderer is started. In our example network, N, the ordering service comprising a single
node, O4, is configured according to a network configuration NC4, which gives administrative rights to organization
R4. At the network level, Certificate Authority CA4 is used to dispense identities to the administrators and network
nodes of the R4 organization.
We can see that the first thing that defines a network, N, is an ordering service, O4. It’s helpful to think of the
ordering service as the initial administration point for the network. As agreed beforehand, O4 is initially configured
and started by an administrator in organization R4, and hosted in R4. The configuration NC4 contains the policies that
describe the starting set of administrative capabilities for the network. Initially this is set to only give R4 rights over
the network. This will change, as we’ll see later, but for now R4 is the only member of the network.
Certificate Authorities
You can also see a Certificate Authority, CA4, which is used to issue certificates to administrators and network nodes.
CA4 plays a key role in our network because it dispenses X.509 certificates that can be used to identify components
as belonging to organization R4. Certificates issued by CAs can also be used to sign transactions to indicate that an
organization endorses the transaction result – a precondition of it being accepted onto the ledger. Let’s examine these
two aspects of a CA in a little more detail.
Firstly, different components of the blockchain network use certificates to identify themselves to each other as being
from a particular organization. That’s why there is usually more than one CA supporting a blockchain network –
different organizations often use different CAs. We’re going to use four CAs in our network; one for each organization.
Indeed, CAs are so important that Hyperledger Fabric provides you with a built-in one (called Fabric-CA) to help you
get going, though in practice, organizations will choose to use their own CA.
The mapping of certificates to member organizations is achieved via a structure called a Membership Services
Provider (MSP). Network configuration NC4 uses a named MSP to identify the properties of certificates dispensed by
CA4 which associate certificate holders with organization R4. NC4 can then use this MSP name in policies to grant
actors from R4 particular rights over network resources. An example of such a policy is to identify the administrators
in R4 who can add new member organizations to the network. We don’t show MSPs on these diagrams, as they would
just clutter them up, but they are very important.
Secondly, we’ll see later how certificates issued by CAs are at the heart of the transaction generation and validation
process. Specifically, X.509 certificates are used in client application transaction proposals and smart contract
transaction responses to digitally sign transactions. Subsequently the network nodes who host copies of the ledger verify
that transaction signatures are valid before accepting transactions onto the ledger.
Let’s recap the basic structure of our example blockchain network. There’s a resource, the network N, accessed by a
set of users defined by a Certificate Authority CA4, who have a set of rights over the resources in the network N as
described by policies contained inside a network configuration NC4. All of this is made real when we configure and
start the ordering service node O4.
NC4 was initially configured to only allow R4 users administrative rights over the network. In this next phase, we are
going to allow organization R1 users to administer the network. Let’s see how the network evolves:
Organization R4 updates the network configuration to make organization R1 an administrator too. After this point R1
and R4 have equal rights over the network configuration.
We see the addition of a new organization R1 as an administrator – R1 and R4 now have equal rights over the
network. We can also see that certificate authority CA1 has been added – it can be used to identify users from the R1
organization. After this point, users from both R1 and R4 can administer the network.
Although the orderer node, O4, is running on R4’s infrastructure, R1 has shared administrative rights over it, as long
as it can gain network access. It means that R1 or R4 could update the network configuration NC4 to allow the R2
organization a subset of network operations. In this way, even though R4 is running the ordering service, and R1 has
full administrative rights over it, R2 has limited rights to create new consortia.
In its simplest form, the ordering service is a single node in the network, and that’s what you can see in the example.
Ordering services are usually multi-node, and can be configured to have different nodes in different organizations. For
example, we might run O4 in R4 and connect it to O2, a separate orderer node in organization R1. In this way, we
would have a multi-site, multi-organization administration structure.
We’ll discuss the ordering service a little more later in this topic, but for now just think of the ordering service as an
administration point which provides different organizations controlled access to the network.
Although the network can now be administered by R1 and R4, there is very little that can be done. The first thing we
need to do is define a consortium. This word literally means “a group with a shared destiny”, so it’s an appropriate
choice for a set of organizations in a blockchain network.
Let’s see how a consortium is defined:
A network administrator defines a consortium X1 that contains two members, the organizations R1 and R2. This
consortium definition is stored in the network configuration NC4, and will be used at the next stage of network devel-
opment. CA1 and CA2 are the respective Certificate Authorities for these organizations.
Because of the way NC4 is configured, only R1 or R4 can create new consortia. This diagram shows the addition of
a new consortium, X1, which defines R1 and R2 as its constituting organizations. We can also see that CA2 has been
added to identify users from R2. Note that a consortium can have any number of organizational members – we have
just shown two as it is the simplest configuration.
Why are consortia important? We can see that a consortium defines the set of organizations in the network who share
a need to transact with one another – in this case R1 and R2. It really makes sense to group organizations together if
they have a common goal, and that’s exactly what’s happening.
The network, although started by a single organization, is now controlled by a larger set of organizations. We could
have started it this way, with R1, R2 and R4 having shared control, but this build up makes it easier to understand.
We’re now going to use consortium X1 to create a really important part of a Hyperledger Fabric blockchain – a
channel.
So let’s create this key part of the Fabric blockchain network – a channel. A channel is a primary communications
mechanism by which the members of a consortium can communicate with each other. There can be multiple channels
in a network, but for now, we’ll start with one.
Let’s see how the first channel has been added to the network:
A channel C1 has been created for R1 and R2 using the consortium definition X1. The channel is governed by a
channel configuration CC1, completely separate to the network configuration. CC1 is managed by R1 and R2 who
have equal rights over C1. R4 has no rights in CC1 whatsoever.
The channel C1 provides a private communications mechanism for the consortium X1. We can see channel C1 has been
connected to the ordering service O4 but that nothing else is attached to it. In the next stage of network development,
we’re going to connect components such as client applications and peer nodes. But at this point, a channel represents
the potential for future connectivity.
Even though channel C1 is a part of the network N, it is quite distinguishable from it. Also notice that organizations
R3 and R4 are not in this channel – it is for transaction processing between R1 and R2. In the previous step, we saw
how R4 could grant R1 permission to create new consortia. It’s helpful to mention that R4 also allowed R1 to create
channels! In this diagram, it could have been organization R1 or R4 who created a channel C1. Again, note that a
channel can have any number of organizations connected to it – we’ve shown two as it’s the simplest configuration.
Again, notice how channel C1 has a completely separate configuration, CC1, to the network configuration NC4. CC1
contains the policies that govern the rights that R1 and R2 have over the channel C1 – and as we’ve seen, R3 and
R4 have no permissions in this channel. R3 and R4 can only interact with C1 if they are added by R1 or R2 to the
appropriate policy in the channel configuration CC1. An example is defining who can add a new organization to the
channel. Specifically, note that R4 cannot add itself to the channel C1 – it must, and can only, be authorized by R1 or
R2.
Why are channels so important? Channels are useful because they provide a mechanism for private communications
and private data between the members of a consortium. Channels provide privacy from other channels, and from the
network. Hyperledger Fabric is powerful in this regard, as it allows organizations to share infrastructure and keep it
private at the same time. There’s no contradiction here – different consortia within the network will have a need for
different information and processes to be appropriately shared, and channels provide an efficient mechanism to do this.
Channels provide an efficient sharing of infrastructure while maintaining data and communications privacy.
We can also see that once a channel has been created, it is in a very real sense “free from the network”. It is only
organizations that are explicitly specified in a channel configuration that have any control over it, from this time
forward into the future. Likewise, any updates to network configuration NC4 from this time onwards will have no direct
effect on channel configuration CC1; for example, if consortium definition X1 is changed, it will not affect the members
of channel C1. Channels are therefore useful because they allow private communications between the organizations
constituting the channel. Moreover, the data in a channel is completely isolated from the rest of the network, including
other channels.
As an aside, there is also a special system channel defined for use by the ordering service. It behaves in exactly the
same way as a regular channel; regular channels are sometimes called application channels for this reason. We don't normally
need to worry about this channel, but we’ll discuss a little bit more about it later in this topic.
Let’s now start to use the channel to connect the blockchain network and the organizational components together. In
the next stage of network development, we can see that our network N has just acquired two new components, namely
a peer node P1 and a ledger instance, L1.
A peer node P1 has joined the channel C1. P1 physically hosts a copy of the ledger L1. P1 and O4 can communicate
with each other using channel C1.
Peer nodes are the network components where copies of the blockchain ledger are hosted! At last, we’re starting to
see some recognizable blockchain components! P1’s purpose in the network is purely to host a copy of the ledger L1
for others to access. We can think of L1 as being physically hosted on P1, but logically hosted on the channel C1.
We’ll see this idea more clearly when we add more peers to the channel.
A key part of P1's configuration is an X.509 identity issued by CA1 which associates P1 with organization R1. Once
P1 is started, it can join channel C1 using the orderer O4. When O4 receives this join request, it uses the channel
configuration CC1 to determine P1’s permissions on this channel. For example, CC1 determines whether P1 can read
and/or write information to the ledger L1.
Notice how peers are joined to channels by the organizations that own them, and though we’ve only added one peer,
we’ll see how there can be multiple peer nodes on multiple channels within the network. We’ll see the different roles
that peers can take on a little later.
Now that the channel C1 has a ledger on it, we can start connecting client applications to consume some of the services
provided by the workhorse of the ledger, the peer!
Notice how the network has grown:
A smart contract S5 has been installed onto P1. Client application A1 in organization R1 can use S5 to access the
ledger via peer node P1. A1, P1 and O4 are all joined to channel C1, i.e. they can all make use of the communication
facilities provided by that channel.
In the next stage of network development, we can see that client application A1 can use channel C1 to connect to
specific network resources – in this case A1 can connect to both peer node P1 and orderer node O4. Again, see how
channels are central to the communication between network and organization components. Just like peers and orderers,
a client application will have an identity that associates it with an organization. In our example, client application A1
is associated with organization R1; and although it is outside the Fabric blockchain network, it is connected to it via
the channel C1.
It might now appear that A1 can access the ledger L1 directly via P1, but in fact, all access is managed via a special
program called a smart contract chaincode, S5. Think of S5 as defining all the common access patterns to the ledger;
S5 provides a well-defined set of ways by which the ledger L1 can be queried or updated. In short, client application
A1 has to go through smart contract S5 to get to ledger L1!
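To make this idea concrete, here is a minimal sketch of what a chaincode like S5 might look like, written against the
Fabric 1.x Go chaincode shim. The function names ("set", "get") and keys are illustrative assumptions, not part of our
sample network:

    package main

    import (
        "github.com/hyperledger/fabric/core/chaincode/shim"
        pb "github.com/hyperledger/fabric/protos/peer"
    )

    // SimpleContract defines the common access patterns to the ledger,
    // in the way the text describes for S5.
    type SimpleContract struct{}

    // Init runs when the chaincode is instantiated on a channel.
    func (c *SimpleContract) Init(stub shim.ChaincodeStubInterface) pb.Response {
        return shim.Success(nil)
    }

    // Invoke is the well-defined entry point by which the ledger
    // can be queried or updated.
    func (c *SimpleContract) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
        fn, args := stub.GetFunctionAndParameters()
        switch fn {
        case "set": // propose an update to the world state
            if len(args) != 2 {
                return shim.Error("expected a key and a value")
            }
            if err := stub.PutState(args[0], []byte(args[1])); err != nil {
                return shim.Error(err.Error())
            }
            return shim.Success(nil)
        case "get": // query the world state
            if len(args) != 1 {
                return shim.Error("expected a key")
            }
            value, err := stub.GetState(args[0])
            if err != nil {
                return shim.Error(err.Error())
            }
            return shim.Success(value)
        }
        return shim.Error("unknown function: " + fn)
    }

    func main() {
        if err := shim.Start(new(SimpleContract)); err != nil {
            panic(err)
        }
    }

Note that a "set" does not write to the ledger directly; it produces a proposed update that must still be endorsed,
ordered and validated before peers commit it, as described next.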
Smart contract chaincodes can be created by application developers in each organization to implement a business
process shared by the consortium members. Smart contracts are used to help generate transactions which can be
subsequently distributed to every node in the network. We'll discuss this idea a little later; it'll be easier to
understand when the network is bigger. For now, the important thing to understand is that to get to this point two
operations must have been performed on the smart contract; it must have been installed, and then instantiated.
After a smart contract S5 has been developed, an administrator in organization R1 must install it onto peer node
P1. This is a straightforward operation; after it has occurred, P1 has full knowledge of S5. Specifically, P1 can see
the implementation logic of S5 – the program code that it uses to access the ledger L1. We contrast this to the S5
interface which merely describes the inputs and outputs of S5, without regard to its implementation.
When an organization has multiple peers in a channel, it can choose the peers upon which it installs smart contracts; it
does not need to install a smart contract on every peer.
However, even though P1 has installed S5, the other components connected to channel C1 remain unaware of it; it must first
be instantiated on channel C1. In our example, which only has a single peer node P1, an administrator in organization
R1 must instantiate S5 on channel C1 using P1. After instantiation, every component on channel C1 is aware of the
existence of S5; and in our example it means that S5 can now be invoked by client application A1!
Note that although every component on the channel can now access S5, they are not able to see its program logic. This
remains private to those nodes who have installed it; in our example that means P1. Conceptually this means that it’s
the smart contract interface that is instantiated, in contrast to the smart contract implementation that is installed. To
reinforce this idea; installing a smart contract shows how we think of it being physically hosted on a peer, whereas
instantiating a smart contract shows how we consider it logically hosted by the channel.
Endorsement policy
The most important piece of additional information supplied at instantiation is an endorsement policy. It describes
which organizations must approve transactions before they will be accepted by other organizations onto their copy of
the ledger. In our sample network, transactions can only be accepted onto ledger L1 if R1 or R2 endorse them.
The act of instantiation places the endorsement policy in channel configuration CC1; it enables it to be accessed by
any member of the channel. You can read more about endorsement policies in the transaction flow topic.
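To make this concrete, here is how such a policy might be written in Fabric's endorsement-policy syntax. The MSP names
are illustrative; each organization is identified by its MSP:

    OR('R1MSP.member', 'R2MSP.member')

This policy is satisfied by a transaction response signed by any member of R1 or R2; an AND(...) policy would instead
require signatures from both organizations.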
Once a smart contract has been installed on a peer node and instantiated on a channel it can be invoked by a client
application. Client applications do this by sending transaction proposals to peers owned by the organizations specified
by the smart contract endorsement policy. The transaction proposal serves as input to the smart contract, which uses it
to generate an endorsed transaction response, which is returned by the peer node to the client application.
It’s these transactions responses that are packaged together with the transaction proposal to form a fully endorsed
transaction, which can be distributed to the entire network. We’ll look at this in more detail later For now, it’s enough
to understand how applications invoke smart contracts to generate endorsed transactions.
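As a sketch of how a client application such as A1 might drive this flow, here is an example using the Go SDK for
Fabric (fabric-sdk-go). The channel, chaincode, organization, user and file names are all illustrative assumptions:

    package main

    import (
        "fmt"

        "github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
        "github.com/hyperledger/fabric-sdk-go/pkg/core/config"
        "github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
    )

    func main() {
        // Load a connection profile describing the peers, orderers and CAs.
        sdk, err := fabsdk.New(config.FromFile("connection-profile.yaml"))
        if err != nil {
            panic(err)
        }
        defer sdk.Close()

        // Act as a user from organization R1 on channel C1.
        ctx := sdk.ChannelContext("c1", fabsdk.WithUser("User1"), fabsdk.WithOrg("R1"))
        client, err := channel.New(ctx)
        if err != nil {
            panic(err)
        }

        // Execute sends the transaction proposal to endorsing peers, collects
        // their signed responses, and submits the endorsed transaction for ordering.
        resp, err := client.Execute(channel.Request{
            ChaincodeID: "s5",
            Fcn:         "set",
            Args:        [][]byte{[]byte("asset1"), []byte("blue")},
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("transaction %s submitted\n", resp.TransactionID)
    }

The SDK hides the proposal/response mechanics behind a single Execute call; a read-only Query call follows the same
pattern but stops after the proposal stage.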
By this stage in network development we can see that organization R1 is fully participating in the network. Its
applications – starting with A1 – can access the ledger L1 via smart contract S5, to generate transactions that will be
endorsed by R1, and therefore accepted onto the ledger because they conform to the endorsement policy.
Recall that our objective was to create a channel for consortium X1 – organizations R1 and R2. This next phase of
network development sees organization R2 add its infrastructure to the network.
Let’s see how the network has evolved:
The network has grown through the addition of infrastructure from organization R2. Specifically, R2 has added peer
node P2, which hosts a copy of ledger L1, and chaincode S5. P2 has also joined channel C1, as has application A2.
A2 and P2 are identified using certificates from CA2. All of this means that both applications A1 and A2 can invoke
S5 on C1 either using peer node P1 or P2.
We can see that organization R2 has added a peer node, P2, on channel C1. P2 also hosts a copy of the ledger L1
and smart contract S5. We can see that R2 has also added client application A2 which can connect to the network via
channel C1. To achieve this, an administrator in organization R2 has created peer node P2 and joined it to channel C1,
in the same way as an administrator in R1.
We have created our first operational network! At this stage in network development, we have a channel in which
organizations R1 and R2 can fully transact with each other. Specifically, this means that applications A1 and A2 can
generate transactions using smart contract S5 and ledger L1 on channel C1.
In contrast, while every peer node hosts a copy of the ledger, there are two different kinds of peer nodes: those
which host smart contracts and those which do not. In our network, every peer hosts a copy of the smart
contract, but in larger networks, there will be many more peer nodes that do not host a copy of the smart contract. A
peer can only run a smart contract if it is installed on it, but it can know about the interface of a smart contract by being
connected to a channel.
You should not think of peer nodes which do not have smart contracts installed as being somehow inferior. It’s more
the case that peer nodes with smart contracts have a special power – to help generate transactions. Note that all peer
nodes can validate and subsequently accept or reject transactions onto their copy of the ledger L1. However, only
peer nodes with a smart contract installed can take part in the process of transaction endorsement which is central to
the generation of valid transactions.
We don’t need to worry about the exact details of how transactions are generated, distributed and accepted in this topic
– it is sufficient to understand that we have a blockchain network where organizations R1 and R2 can share information
and processes as ledger-captured transactions. We’ll learn a lot more about transactions, ledgers, smart contracts in
other topics.
Types of peers
In Hyperledger Fabric, while all peers are the same, they can assume multiple roles depending on how the network is
configured. We now have enough understanding of a typical network topology to describe these roles.
• Committing peer. Every peer node in a channel is a committing peer. It receives blocks of generated transactions,
which are subsequently validated before they are committed to the peer node’s copy of the ledger as an append
operation.
• Endorsing peer. Every peer with a smart contract installed can be an endorsing peer. However, to actually be an
endorsing peer, the smart contract on the peer must be used by a client application to generate a digitally signed
transaction response. The term endorsing peer is an explicit reference to this fact.
An endorsement policy for a smart contract identifies the organizations whose peers must digitally sign a
generated transaction before it can be accepted onto a committing peer's copy of the ledger.
These are the two major types of peer; there are two other roles a peer can adopt:
• Leader peer. When an organization has multiple peers in a channel, a leader peer is a node which takes respon-
sibility for distributing transactions from the orderer to the other committing peers in the organization. A peer
can choose to participate in static or dynamic leadership selection.
It is helpful, therefore, to think of two sets of peers from a leadership perspective – those that have static leader
selection, and those with dynamic leader selection. For the static set, zero or more peers can be configured as
leaders. For the dynamic set, one peer will be elected leader by the set. Moreover, in the dynamic set, if a leader
peer fails, then the remaining peers will re-elect a leader.
It means that an organization's peers can have one or more leaders connected to the ordering service. This can
help to improve resilience and scalability in large networks which process high volumes of transactions. (A sketch
of how leader selection is configured appears after this list.)
• Anchor peer. If a peer needs to communicate with a peer in another organization, then it can use one of the
anchor peers defined in the channel configuration for that organization. An organization can have zero or more
anchor peers defined for it, and an anchor peer can help with many different cross-organization communication
scenarios.
Note that a peer can be a committing peer, endorsing peer, leader peer and anchor peer all at the same time! Only the
anchor peer is optional – for all practical purposes there will always be a leader peer and at least one endorsing peer
and at least one committing peer.
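For reference, whether a peer takes part in static or dynamic leader selection is controlled by two settings in the
peer's core.yaml configuration file. A sketch, with values shown as in Fabric 1.x:

    peer:
      gossip:
        useLeaderElection: true   # dynamic: the organization's peers elect a leader
        orgLeader: false          # static: set to true to nominate this peer as leader

At most one of the two settings should be true for a given peer: enabling useLeaderElection gives dynamic election
(with re-election if the leader fails), whereas enabling orgLeader statically nominates the peer as a leader.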
In a similar way to organization R1, organization R2 must install smart contract S5 onto its peer node, P2. That’s
obvious – if applications A1 or A2 wish to use S5 on peer node P2 to generate transactions, it must first be present;
installation is the mechanism by which this happens. At this point, peer node P2 has a physical copy of the smart
contract and the ledger; like P1, it can both generate and accept transactions onto its copy of ledger L1.
However, in contrast to organization R1, organization R2 does not need to instantiate smart contract S5 on channel
C1. That’s because S5 has already been instantiated on the channel by organization R1. Instantiation only needs to
happen once; any peer which subsequently joins the channel knows that smart contract S5 is available to the channel.
This reflects the fact that ledger L1 and smart contract S5 really exist in a physical manner on the peer nodes, and in a
logical manner on the channel; R2 is merely adding another physical instance of L1 and S5 to the network.
In our network, we can see that channel C1 connects two client applications, two peer nodes and an ordering service.
Since there is only one channel, there is only one logical ledger with which these components interact. Peer nodes P1
and P2 have identical copies of ledger L1. Copies of smart contract S5 will usually be identically implemented using
the same programming language, but if not, they must be semantically equivalent.
We can see that the careful addition of peers to the network can help support increased throughput, stability, and
resilience. For example, more peers in a network will allow more applications to connect to it; and multiple peers in
an organization will provide extra resilience in the case of planned or unplanned outages.
It all means that it is possible to configure sophisticated topologies which support a variety of operational goals – there
is no theoretical limit to how big a network can get. Moreover, the technical mechanism by which peers within an
individual organization efficiently discover and communicate with each other – the gossip protocol – will accommodate
a large number of peer nodes in support of such topologies.
The careful use of network and channel policies allow even large networks to be well-governed. Organizations are free
to add peer nodes to the network so long as they conform to the policies agreed by the network. Network and channel
policies create the balance between autonomy and control which characterizes a de-centralized network.
We’re now going to simplify the visual vocabulary used to represent our sample blockchain network. As the size of
the network grows, the lines initially used to help us understand channels will become cumbersome. Imagine how
complicated our diagram would be if we added another peer, client application, or channel!
That’s what we’re going to do in a minute, so before we do, let’s simplify the visual vocabulary. Here’s a simplified
representation of the network we’ve developed so far:
The diagram shows the facts relating to channel C1 in the network N as follows: Client applications A1 and A2
can use channel C1 for communication with peers P1 and P2, and orderer O4. Peer nodes P1 and P2 can use the
communication services of channel C1. Ordering service O4 can make use of the communication services of channel
C1. Channel configuration CC1 applies to channel C1.
Note that the network diagram has been simplified by replacing channel lines with connection points, shown as blue
circles which include the channel number. No information has been lost. This representation is more scalable be-
cause it eliminates crossing lines. This allows us to more clearly represent larger networks. We’ve achieved this
simplification by focusing on the connection points between components and a channel, rather than the channel itself.
In this next phase of network development, we introduce organization R3. We’re going to give organizations R2 and
R3 a separate application channel which allows them to transact with each other. This application channel will be
completely separate to that previously defined, so that R2 and R3 transactions can be kept private to them.
Let’s return to the network level and define a new consortium, X2, for R2 and R3:
A network administrator from organization R1 or R4 has added a new consortium definition, X2, which includes
organizations R2 and R3. This will be used to define a new channel for X2.
Notice that the network now has two consortia defined: X1 for organizations R1 and R2 and X2 for organizations R2
and R3. Consortium X2 has been introduced in order to be able to create a new channel for R2 and R3.
A new channel can only be created by those organizations specifically identified in the network configuration policy,
NC4, as having the appropriate rights to do so, i.e. R1 or R4. This is an example of a policy which separates
organizations that can manage resources at the network level versus those who can manage resources at the channel
level. Seeing these policies at work helps us understand why Hyperledger Fabric has a sophisticated tiered policy
structure.
In practice, consortium definition X2 has been added to the network configuration NC4. We discuss the exact mechan-
ics of this operation elsewhere in the documentation.
Let’s now use this new consortium definition, X2, to create a new channel, C2. To help reinforce your understanding of
the simpler channel notation, we’ve used both visual styles – channel C1 is represented with blue circular end points,
whereas channel C2 is represented with red connecting lines:
A new channel C2 has been created for R2 and R3 using consortium definition X2. The channel has a channel con-
figuration CC2, completely separate to the network configuration NC4, and the channel configuration CC1. Channel
C2 is managed by R2 and R3 who have equal rights over C2 as defined by a policy in CC2. R1 and R4 have no rights
defined in CC2 whatsoever.
The channel C2 provides a private communications mechanism for the consortium X2. Again, notice how organiza-
tions united in a consortium are what form channels. The channel configuration CC2 now contains the policies that
govern channel resources, assigning management rights to organizations R2 and R3 over channel C2. It is managed
exclusively by R2 and R3; R1 and R4 have no power in channel C2. For example, channel configuration CC2 can
subsequently be updated to add organizations to support network growth, but this can only be done by R2 or R3.
Note how the channel configurations CC1 and CC2 remain completely separate from each other, and completely
separate from the network configuration, NC4. Again we’re seeing the de-centralized nature of a Hyperledger Fabric
network; once channel C2 has been created, it is managed by organizations R2 and R3 independently of other network
elements. Channel policies always remain separate from each other and can only be changed by the organizations
authorized to do so in the channel.
As the network and channels evolve, so will the network and channel configurations. There is a process by which
this is accomplished in a controlled manner – involving configuration transactions which capture the change to these
configurations. Every configuration change results in a new configuration block transaction being generated, and
later in this topic, we’ll see how these blocks are validated and accepted to create updated network and channel
configurations respectively.
Throughout our sample network, we see the importance of network and channel configurations. These configurations
are important because they encapsulate the policies agreed by the network members, which provide a shared reference
for controlling access to network resources. Network and channel configurations also contain facts about the network
and channel composition, such as the names of consortia and their organizations.
For example, when the network is first formed using the ordering service node O4, its behaviour is governed by the
network configuration NC4. The initial configuration of NC4 only contains policies that permit organization R4 to
manage network resources. NC4 is subsequently updated to also allow R1 to manage network resources. Once this
change is made, any administrator from organization R1 or R4 that connects to O4 will have network management
rights because that is what the policy in the network configuration NC4 permits. Internally, each node in the ordering
service records each channel in the network configuration, so that there is a record of each channel created, at the
network level.
It means that although ordering service node O4 is the actor that created consortia X1 and X2 and channels C1 and
C2, the intelligence of the network is contained in the network configuration NC4 that O4 is obeying. As long as O4
behaves as a good actor, and correctly implements the policies defined in NC4 whenever it is dealing with network
resources, our network will behave as all organizations have agreed. In many ways NC4 can be considered more
important than O4 because, ultimately, it controls network access.
The same principles apply for channel configurations with respect to peers. In our network, P1 and P2 are likewise
good actors. When peer nodes P1 and P2 are interacting with client applications A1 or A2 they are each using the
policies defined within channel configuration CC1 to control access to the channel C1 resources.
For example, if A1 wants to access the smart contract chaincode S5 on peer nodes P1 or P2, each peer node uses its
copy of CC1 to determine the operations that A1 can perform. For example, A1 may be permitted to read or write data
from the ledger L1 according to policies defined in CC1. We'll see later that the same pattern holds for actors in
channel C2 and its channel configuration CC2. Again, we can see that while the peers and applications are critical actors in the network,
their behaviour in a channel is dictated more by the channel configuration policy than any other factor.
Finally, it is helpful to understand how network and channel configurations are physically realized. We can see that
network and channel configurations are logically singular – there is one for the network, and one for each channel.
This is important; every component that accesses the network or the channel must have a shared understanding of the
permissions granted to different organizations.
Even though there is logically a single configuration, it is actually replicated and kept consistent by every node that
forms the network or channel. For example, in our network peer nodes P1 and P2 both have a copy of channel
configuration CC1, and by the time the network is fully complete, peer nodes P2 and P3 will both have a copy of
channel configuration CC2. Similarly ordering service node O4 has a copy of the network configuration, but in a
multi-node configuration, every ordering service node will have its own copy of the network configuration.
Both network and channel configurations are kept consistent using the same blockchain technology that is used for
user transactions – but for configuration transactions. To change a network or channel configuration, an administrator
must submit a configuration transaction to change the network or channel configuration. It must be signed by the
organizations identified in the appropriate policy as being responsible for configuration change. This policy is called
the mod_policy and we’ll discuss it later.
Indeed, the ordering service nodes operate a mini-blockchain, connected via the system channel we mentioned earlier.
Using the system channel, ordering service nodes distribute network configuration transactions. These transactions are
used to co-operatively maintain a consistent copy of the network configuration at each ordering service node. In a
similar way, peer nodes in an application channel can distribute channel configuration transactions. Likewise, these
transactions are used to maintain a consistent copy of the channel configuration at each peer node.
This pattern of objects that are logically singular but physically distributed is a common one in Hyperledger Fabric.
Network configurations, for example, are logically single, yet turn out to be physically replicated among a set of
ordering service nodes. We also see it with channel configurations, ledgers, and to some extent smart contracts, which
are installed in multiple places but whose interfaces exist logically at the channel level. It's a pattern you see
repeated time and again in Hyperledger Fabric, and it enables Fabric to be both de-centralized and yet manageable at
the same time.
Now that organization R3 is able to fully participate in channel C2, let’s add its infrastructure components to the
channel. Rather than do this one component at a time, we’re going to add a peer, its local copy of a ledger, a smart
contract and a client application all at once!
Let’s see the network with organization R3’s components added:
The diagram shows the facts relating to channels C1 and C2 in the network N as follows: Client applications A1 and
A2 can use channel C1 for communication with peers P1 and P2, and ordering service O4; client application A3 can
use channel C2 for communication with peer P3 and ordering service O4. Ordering service O4 can make use of the
communication services of channels C1 and C2. Channel configuration CC1 applies to channel C1, CC2 applies to
channel C2.
First of all, notice that because peer node P3 is connected to channel C2, it has a different ledger – L2 – to those peer
nodes using channel C1. The ledger L2 is effectively scoped to channel C2. The ledger L1 is completely separate;
it is scoped to channel C1. This makes sense – the purpose of the channel C2 is to provide private communications
between the members of the consortium X2, and the ledger L2 is the private store for their transactions.
In a similar way, the smart contract S6, installed on peer node P3, and instantiated on channel C2, is used to provide
controlled access to ledger L2. Application A3 can now use channel C2 to invoke the services provided by smart
contract S6 to generate transactions that can be accepted onto every copy of the ledger L2 in the network.
At this point in time, we have a single network that has two completely separate channels defined within it. These
channels provide independently managed facilities for organizations to transact with each other. Again, this is de-
centralization at work; we have a balance between control and autonomy. This is achieved through policies which are
applied to channels which are controlled by, and affect, different organizations.
In this final stage of network development, let’s return our focus to organization R2. We can exploit the fact that R2 is
a member of both consortia X1 and X2 by joining it to multiple channels:
The diagram shows the facts relating to channels C1 and C2 in the network N as follows: Client applications A1 can
use channel C1 for communication with peers P1 and P2, and ordering service O4; client application A2 can use
channel C1 for communication with peers P1 and P2 and channel C2 for communication with peers P2 and P3 and
ordering service O4; client application A3 can use channel C2 for communication with peer P3 and P2 and ordering
service O4. Ordering service O4 can make use of the communication services of channels C1 and C2. Channel
configuration CC1 applies to channel C1, CC2 applies to channel C2.
We can see that R2 is a special organization in the network, because it is the only organization that is a member of
two application channels! It is able to transact with organization R1 on channel C1, while at the same time it can also
transact with organization R3 on a different channel, C2.
Notice how peer node P2 has smart contract S5 installed for channel C1 and smart contract S6 installed for channel
C2. Peer node P2 is a full member of both channels at the same time via different smart contracts for different ledgers.
This is a very powerful concept – channels provide both a mechanism for the separation of organizations, and a
mechanism for collaboration between organizations. All the while, this infrastructure is provided by, and shared
between, a set of independent organizations.
It is also important to note that peer node P2’s behaviour is controlled very differently depending upon the channel
in which it is transacting. Specifically, the policies contained in channel configuration CC1 dictate the operations
available to P2 when it is transacting in channel C1, whereas it is the policies in channel configuration CC2 that
control P2’s behaviour in channel C2.
Again, this is desirable – R2 and R1 agreed the rules for channel C1, whereas R2 and R3 agreed the rules for channel
C2. These rules were captured in the respective channel policies – they can and must be used by every component in
a channel to enforce correct behaviour, as agreed.
Similarly, we can see that client application A2 is now able to transact on channels C1 and C2. And likewise, it too
will be governed by the policies in the appropriate channel configurations. As an aside, note that client application
A2 and peer node P2 are using a mixed visual vocabulary – both lines and connections. You can see that they are
equivalent; they are visual synonyms.
The observant reader may notice that the ordering service node appears to be a centralized component; it was used to
create the network initially, and connects to every channel in the network. Even though we added R1 and R4 to the
network configuration policy NC4 which controls the orderer, the node was running on R4’s infrastructure. In a world
of de-centralization, this looks wrong!
Don’t worry! Our example network showed the simplest ordering service configuration to help you understand the idea
of a network administration point. In fact, the ordering service can itself be completely de-centralized! We mentioned
earlier that an ordering service could comprise many individual nodes owned by different organizations,
so let’s see how that would be done in our sample network.
Let’s have a look at a more realistic ordering service node configuration:
A multi-organization ordering service. The ordering service comprises ordering service nodes O1 and O4. O1 is
provided by organization R1 and node O4 is provided by organization R4. The network configuration NC4 defines
network resource permissions for actors from both organizations R1 and R4.
We can see that this ordering service is completely de-centralized – it runs in organization R1 and it runs in organization
R4. The network configuration policy, NC4, permits R1 and R4 equal rights over network resources. Client applica-
tions and peer nodes from organizations R1 and R4 can manage network resources by connecting to either node O1
or node O4, because both nodes behave the same way, as defined by the policies in network configuration NC4. In
practice, actors from a particular organization tend to use infrastructure provided by their home organization, but that’s
certainly not always the case.
As well as being the management point for the network, the ordering service also provides another key facility – it
is the distribution point for transactions. The ordering service is the component which gathers endorsed transactions
from applications and orders them into transaction blocks, which are subsequently distributed to every peer node in
the channel. At each of these committing peers, transactions are recorded, whether valid or invalid, and their local
copy of the ledger updated appropriately.
Notice how the ordering service node O4 performs a very different role for the channel C1 than it does for the network
N. When acting at the channel level, O4’s role is to gather transactions and distribute blocks inside channel C1. It
does this according to the policies defined in channel configuration CC1. In contrast, when acting at the network
level, O4’s role is to provide a management point for network resources according to the policies defined in network
configuration NC4. Notice again how these roles are defined by different policies within the channel and network
configurations respectively. This should reinforce to you the importance of declarative policy based configuration in
Hyperledger Fabric. Policies both define, and are used to control, the agreed behaviours by each and every member of
a consortium.
We can see that the ordering service, like the other components in Hyperledger Fabric, is a fully de-centralized
component. Whether acting as a network management point, or as a distributor of blocks in a channel, its nodes can be
distributed across the different organizations in the network as required.
Changing policy
Throughout our exploration of the sample network, we’ve seen the importance of the policies to control the behaviour
of the actors in the system. We’ve only discussed a few of the available policies, but there are many that can be
declaratively defined to control every aspect of behaviour. These individual policies are discussed elsewhere in the
documentation.
Most importantly of all, Hyperledger Fabric provides a uniquely powerful policy that allows network and channel
administrators to manage policy change itself! The underlying philosophy is that policy change is a constant, whether
it occurs within or between organizations, or whether it is imposed by external regulators. For example, new or-
ganizations may join a channel, or existing organizations may have their permissions increased or decreased. Let’s
investigate a little more how policy change is implemented in Hyperledger Fabric.
The key point to understand is that policy change is managed by a policy within the policy itself. The modification
policy, or mod_policy for short, is a first class policy within a network or channel configuration that manages change.
Let’s give two brief examples of how we’ve already used mod_policy to manage change in our network!
The first example was when the network was initially set up. At this time, only organization R4 was allowed to manage
the network. In practice, this was achieved by making R4 the only organization defined in the network configuration
NC4 with permissions to network resources. Moreover, the mod_policy for NC4 only mentioned organization R4 –
only R4 was allowed to change this configuration.
We then evolved the network N to also allow organization R1 to administer the network. R4 did this by adding R1 to
the policies for channel creation and consortium creation. Because of this change, R1 was able to define the consortia
X1 and X2, and create the channels C1 and C2. R1 had equal administrative rights over the channel and consortium
policies in the network configuration.
R4, however, could grant even more power over the network configuration to R1! R4 could add R1 to the mod_policy
such that R1 would be able to manage change of the network policy too.
This second power is far greater than the first, because now R1 has full control over the network configuration
NC4! This means that R1 can, in principle, remove R4's management rights from the network. In practice,
R4 would configure the mod_policy such that R4 would need to also approve the change, or that all organizations in
the mod_policy would have to approve the change. There’s lots of flexibility to make the mod_policy as sophisticated
as it needs to be to support whatever change process is required.
This is mod_policy at work – it has allowed the graceful evolution of a basic configuration into a sophisticated one.
All the time this has occurred with the agreement of all organizations involved. The mod_policy behaves like every
other policy inside a network or channel configuration; it defines a set of organizations that are allowed to change the
mod_policy itself.
We’ve only scratched the surface of the power of policies and mod_policy in particular in this subsection. It is
discussed at much more length in the policy topic, but for now let’s return to our finished network!
Let’s recap what our network looks like using a consistent visual vocabulary. We’ve re-organized it slightly using our
more compact visual syntax, because it better accommodates larger topologies:
In this diagram we see that the Fabric blockchain network consists of two application channels and one ordering
channel. The organizations R1 and R4 are responsible for the ordering channel, R1 and R2 are responsible for the
blue application channel while R2 and R3 are responsible for the red application channel. Client application A1 is
an element of organization R1, and CA1 is its certificate authority. Note that peer P2 of organization R2 can use the
communication facilities of the blue and the red application channel. Each application channel has its own channel
configuration, in this case CC1 and CC2. The channel configuration of the system channel is part of the network
configuration, NC4.
We’re at the end of our conceptual journey to build a sample Hyperledger Fabric blockchain network. We’ve created a
four organization network with two channels and three peer nodes, with two smart contracts and an ordering service.
It is supported by four certificate authorities. It provides ledger and smart contract services to three client applications,
who can interact with it via the two channels. Take a moment to look through the details of the network in the diagram,
and feel free to read back through the topic to reinforce your knowledge, or go to a more detailed topic.
In this topic, we’ve seen how different organizations share their infrastructure to provide an integrated Hyperledger
Fabric blockchain network. We’ve seen how the collective infrastructure can be organized into channels that provide
private communications mechanisms that are independently managed. We’ve seen how actors such as client applica-
tions, administrators, peers and orderers are identified as being from different organizations by their use of certificates
from their respective certificate authorities. And in turn, we’ve seen the importance of policy to define the agreed
permissions that these organizational actors have over network and channel resources.
4.5 Identity
The different actors in a blockchain network include peers, orderers, client applications, administrators and more. Each
of these actors — active elements inside or outside a network able to consume services — has a digital identity encap-
sulated in an X.509 digital certificate. These identities really matter because they determine the exact permissions
over resources and access to information that actors have in a blockchain network.
A digital identity furthermore has some additional attributes that Fabric uses to determine permissions, and it gives
the union of an identity and the associated attributes a special name — principal. Principals are just like userIDs or
groupIDs, but a little more flexible because they can include a wide range of properties of an actor’s identity, such as
the actor’s organization, organizational unit, role or even the actor’s specific identity. When we talk about principals,
they are the properties which determine their permissions.
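To illustrate how a principal surfaces in practice, a chaincode can inspect the invoking actor's identity and attributes
using Fabric's Go client identity (cid) library. A minimal sketch; the "role" attribute name is an assumption for
illustration:

    package identity

    import (
        "fmt"

        "github.com/hyperledger/fabric/core/chaincode/lib/cid"
        "github.com/hyperledger/fabric/core/chaincode/shim"
    )

    // describeCaller prints the pieces of a principal: the MSP (organization)
    // the caller belongs to, and an attribute carried in the caller's certificate.
    func describeCaller(stub shim.ChaincodeStubInterface) error {
        mspID, err := cid.GetMSPID(stub) // the caller's organization
        if err != nil {
            return err
        }
        role, found, err := cid.GetAttributeValue(stub, "role") // attribute name is illustrative
        if err != nil {
            return err
        }
        if found {
            fmt.Printf("caller from %s has role %s\n", mspID, role)
        }
        return nil
    }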
For an identity to be verifiable, it must come from a trusted authority. A membership service provider (MSP) is
how this is achieved in Fabric. More specifically, an MSP is a component that defines the rules that govern the
valid identities for this organization. The default MSP implementation in Fabric uses X.509 certificates as identities,
adopting a traditional Public Key Infrastructure (PKI) hierarchical model (more on PKI later).
Imagine that you visit a supermarket to buy some groceries. At the checkout you see a sign that says that only Visa,
Mastercard and AMEX cards are accepted. If you try to pay with a different card — let’s call it an “ImagineCard” —
it doesn’t matter whether the card is authentic and you have sufficient funds in your account. It will be not be accepted.
Having a valid credit card is not enough — it must also be accepted by the store! PKIs and MSPs work together in the
same way — a PKI provides a list of identities, and an MSP says which of these are members of a given organization
that participates in the network.
PKI certificate authorities and MSPs provide a similar combination of functionalities. A PKI is like a card provider —
it dispenses many different types of verifiable identities. An MSP, on the other hand, is like the list of card providers
accepted by the store, determining which identities are the trusted members (actors) of the store payment network.
MSPs turn verifiable identities into the members of a blockchain network.
Let’s drill into these concepts in a little more detail.
A public key infrastructure (PKI) is a collection of internet technologies that provides secure communications
in a network. It’s PKI that puts the S in HTTPS — and if you’re reading this documentation on a web browser, you’re
probably using a PKI to make sure it comes from a verified source.
The elements of Public Key Infrastructure (PKI). A PKI is comprised of Certificate Authorities who issue digital
certificates to parties (e.g., users of a service, service provider), who then use them to authenticate themselves in the
messages they exchange with their environment. A CA’s Certificate Revocation List (CRL) constitutes a reference for
the certificates that are no longer valid. Revocation of a certificate can happen for a number of reasons. For example,
a certificate may be revoked because the cryptographic private material associated to the certificate has been exposed.
Although a blockchain network is more than a communications network, it relies on the PKI standard to ensure
secure communication between various network participants, and to ensure that messages posted on the blockchain are
properly authenticated. It’s therefore important to understand the basics of PKI and then why MSPs are so important.
There are four key elements to PKI:
• Digital Certificates
• Public and Private Keys
• Certificate Authorities
• Certificate Revocation Lists
Let’s quickly describe these PKI basics, and if you want to know more details, Wikipedia is a good place to start.
A digital certificate is a document which holds a set of attributes relating to the holder of the certificate. The most
common type of certificate is the one compliant with the X.509 standard, which allows the encoding of a party’s
identifying details in its structure.
For example, Mary Morris in the Manufacturing Division of Mitchell Cars in Detroit, Michigan might have
a digital certificate with a SUBJECT attribute of C=US, ST=Michigan, L=Detroit, O=Mitchell Cars,
OU=Manufacturing, CN=Mary Morris /UID=123456. Mary’s certificate is similar to her government iden-
tity card — it provides information about Mary which she can use to prove key facts about her. There are many other
attributes in an X.509 certificate, but let’s concentrate on just these for now.
A digital certificate describing a party called Mary Morris. Mary is the SUBJECT of the certificate, and the high-
lighted SUBJECT text shows key facts about Mary. The certificate also holds many more pieces of information, as you
can see. Most importantly, Mary’s public key is distributed within her certificate, whereas her private signing key is
not. This signing key must be kept private.
What is important is that all of Mary’s attributes can be recorded using a mathematical technique called cryptography
(literally, “secret writing”) so that tampering will invalidate the certificate. Cryptography allows Mary to present her
certificate to others to prove her identity so long as the other party trusts the certificate issuer, known as a Certificate
Authority (CA). As long as the CA keeps certain cryptographic information securely (meaning, its own private
signing key), anyone reading the certificate can be sure that the information about Mary has not been tampered with
— it will always have those particular attributes for Mary Morris. Think of Mary’s X.509 certificate as a digital identity
card that is impossible to change.
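For the curious, a certificate like Mary's can be inspected programmatically. A short Go sketch using the standard
crypto/x509 library; the file name is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read a PEM-encoded certificate such as Mary's.
        data, err := os.ReadFile("mary.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // The SUBJECT attributes (C, ST, L, O, OU, CN) are standard X.509 fields.
        fmt.Println("Subject:", cert.Subject)
        fmt.Println("Issuer: ", cert.Issuer) // the CA that signed this certificate
        fmt.Println("Valid until:", cert.NotAfter)
    }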
Authentication and message integrity are important concepts in secure communications. Authentication requires that
parties who exchange messages are assured of the identity that created a specific message. For a message to have
“integrity” means that it cannot have been modified during its transmission. For example, you might want to be sure
you’re communicating with the real Mary Morris rather than an impersonator. Or if Mary has sent you a message, you
might want to be sure that it hasn’t been tampered with by anyone else during transmission.
Traditional authentication mechanisms rely on digital signatures that, as the name suggests, allow a party to digitally
sign its messages. Digital signatures also provide guarantees on the integrity of the signed message.
Technically speaking, digital signature mechanisms require each party to hold two cryptographically connected keys:
a public key that is made widely available and acts as authentication anchor, and a private key that is used to produce
digital signatures on messages. Recipients of digitally signed messages can verify the origin and integrity of a
received message by checking that the attached signature is valid under the public key of the expected sender.
The unique relationship between a private key and the respective public key is the cryptographic magic that
makes secure communications possible. The unique mathematical relationship between the keys is such that the
private key can be used to produce a signature on a message that only the corresponding public key can match, and
only on the same message.
In the example above, Mary uses her private key to sign the message. The signature can be verified by anyone who
sees the signed message using her public key.
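This sign-then-verify mechanism can be sketched in a few lines of Go using the standard library. Fabric's default
identities use ECDSA keys like these; the message text here is purely illustrative:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Mary's key pair: the private key stays secret, the public key is
        // distributed in her certificate.
        priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        msg := []byte("a message from Mary") // illustrative message
        digest := sha256.Sum256(msg)

        // Only the holder of the private key can produce this signature.
        sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
        if err != nil {
            panic(err)
        }

        // Anyone with the public key can verify origin and integrity.
        fmt.Println("signature valid:", ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig))

        // Changing even one byte of the message breaks verification.
        tampered := sha256.Sum256([]byte("a message from Mallory"))
        fmt.Println("tampered valid:", ecdsa.VerifyASN1(&priv.PublicKey, tampered[:], sig))
    }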
As you've seen, an actor or a node is able to participate in the blockchain network by means of a digital identity
issued for it by an authority trusted by the system. In the most common case, digital identities (or simply identities)
have the form of cryptographically validated digital certificates that comply with the X.509 standard and are issued by a
Certificate Authority (CA).
CAs are a common part of internet security protocols, and you’ve probably heard of some of the more popular ones:
Symantec (originally Verisign), GeoTrust, DigiCert, GoDaddy, and Comodo, among others.
A Certificate Authority dispenses certificates to different actors. These certificates are digitally signed by the CA and
bind together the actor with the actor’s public key (and optionally with a comprehensive list of properties). As a result,
if one trusts the CA (and knows its public key), it can trust that the specific actor is bound to the public key included in
the certificate, and owns the included attributes, by validating the CA’s signature on the actor’s certificate.
Certificates can be widely disseminated, as they include neither the actors' nor the CA's private keys. As such
they can be used as anchors of trust for authenticating messages coming from different actors.
CAs also have a certificate, which they make widely available. This allows the consumers of identities issued by a given
CA to verify them by checking that the certificate could only have been generated by the holder of the corresponding
private key (the CA).
In a blockchain setting, every actor who wishes to interact with the network needs an identity. In this setting, you might
say that one or more CAs can be used to define the members of an organization from a digital perspective. It's
the CA that provides the basis for an organization’s actors to have a verifiable digital identity.
CAs come in two flavors: Root CAs and Intermediate CAs. Because Root CAs (Symantec, Geotrust, etc) have to
securely distribute hundreds of millions of certificates to internet users, it makes sense to spread this process out
across what are called Intermediate CAs. These Intermediate CAs have their certificates issued by the root CA or
another intermediate authority, allowing the establishment of a “chain of trust” for any certificate that is issued by
any CA in the chain. This ability to track back to the Root CA not only allows the function of CAs to scale while
still providing security — allowing organizations that consume certificates to use Intermediate CAs with confidence
— it limits the exposure of the Root CA, which, if compromised, would endanger the entire chain of trust. If an
Intermediate CA is compromised, on the other hand, there will be a much smaller exposure.
A chain of trust is established between a Root CA and a set of Intermediate CAs as long as the issuing CA for the
certificate of each of these Intermediate CAs is either the Root CA itself or has a chain of trust to the Root CA.
Intermediate CAs provide a huge amount of flexibility when it comes to the issuance of certificates across multiple
organizations, and that’s very helpful in a permissioned blockchain system (like Fabric). For example, you’ll see that
different organizations may use different Root CAs, or the same Root CA with different Intermediate CAs — it really
does depend on the needs of the network.
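Verifying that a certificate tracks back to a trusted Root CA is a standard PKI operation. A Go sketch using the
standard crypto/x509 library, with illustrative file names:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // loadCert reads a single PEM-encoded certificate from disk.
    func loadCert(path string) *x509.Certificate {
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        return cert
    }

    func main() {
        roots := x509.NewCertPool()
        roots.AddCert(loadCert("root-ca.pem")) // the Root CA we trust

        intermediates := x509.NewCertPool()
        intermediates.AddCert(loadCert("intermediate-ca.pem"))

        leaf := loadCert("mary.pem") // the end-entity certificate to check

        // Verification succeeds only if a chain can be built from the leaf,
        // through the intermediates, back to a trusted root.
        if _, err := leaf.Verify(x509.VerifyOptions{
            Roots:         roots,
            Intermediates: intermediates,
        }); err != nil {
            fmt.Println("no chain of trust:", err)
            return
        }
        fmt.Println("certificate chains back to the trusted Root CA")
    }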
Fabric CA
It’s because CAs are so important that Fabric provides a built-in CA component to allow you to create CAs in the
blockchain networks you form. This component — known as Fabric CA — is a private root CA provider capable of
managing digital identities of Fabric participants that have the form of X.509 certificates. Because Fabric CA is
a custom CA targeting the Root CA needs of Fabric, it is inherently not capable of providing SSL certificates for
general/automatic use in browsers. However, because some CA must be used to manage identity (even in a test
environment), Fabric CA can be used to provide and manage certificates. It is also possible — and fully appropriate
— to use a public/commercial root or intermediate CA to provide identification.
If you’re interested, you can read a lot more about Fabric CA in the CA documentation section.
A Certificate Revocation List (CRL) is easy to understand — it’s just a list of references to certificates that a CA knows
to be revoked for one reason or another. If you recall the store scenario, a CRL would be like a list of stolen credit
cards.
When a third party wants to verify another party’s identity, it first checks the issuing CA’s CRL to make sure that
the certificate has not been revoked. A verifier doesn’t have to check the CRL, but if they don’t they run the risk of
accepting a compromised identity.
Using a CRL to check that a certificate is still valid. If an impersonator tries to pass a compromised digital certificate
to a validating party, it can be first checked against the issuing CA’s CRL to make sure it’s not listed as no longer
valid.
Note that a certificate being revoked is very different from a certificate expiring. Revoked certificates have not expired
— they are, by every other measure, fully valid certificates. For more in-depth information about CRLs, Wikipedia is a
good place to start.
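A verifier's CRL check can also be sketched with the standard Go library (the field used below is the classic
revoked-certificate list; file names are illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Load the CA's CRL (PEM-encoded here; DER also works without decoding).
        crlPEM, err := os.ReadFile("ca.crl.pem")
        if err != nil {
            panic(err)
        }
        crlBlock, _ := pem.Decode(crlPEM)
        crl, err := x509.ParseRevocationList(crlBlock.Bytes)
        if err != nil {
            panic(err)
        }

        // Load the certificate presented by the party being verified.
        certPEM, err := os.ReadFile("mary.pem")
        if err != nil {
            panic(err)
        }
        certBlock, _ := pem.Decode(certPEM)
        cert, err := x509.ParseCertificate(certBlock.Bytes)
        if err != nil {
            panic(err)
        }

        // A certificate is revoked if its serial number appears in the CRL.
        for _, entry := range crl.RevokedCertificates {
            if entry.SerialNumber.Cmp(cert.SerialNumber) == 0 {
                fmt.Println("certificate has been revoked — reject it")
                return
            }
        }
        fmt.Println("certificate not listed in this CRL")
    }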
Now that you’ve seen how a PKI can provide verifiable identities through a chain of trust, the next step is to see how
these identities can be used to represent the trusted members of a blockchain network. That’s where a Membership
Service Provider (MSP) comes into play — it identifies the parties who are the members of a given organization
in the blockchain network.
To learn more about membership, check out the conceptual documentation on MSPs.
4.6 Membership
If you’ve read through the documentation on identity you’ve seen how a PKI can provide verifiable identities through
a chain of trust. Now let’s see how these identities can be used to represent the trusted members of a blockchain
network.
This is where a Membership Service Provider (MSP) comes into play — it identifies which Root CAs and Inter-
mediate CAs are trusted to define the members of a trust domain, e.g., an organization, either by listing the
identities of their members, or by identifying which CAs are authorized to issue valid identities for their members, or
— as will usually be the case — through a combination of both.
The power of an MSP goes beyond simply listing who is a network participant or member of a channel. An MSP can identify the specific roles an actor might play within the scope of the organization the MSP represents (e.g., admins, or members of a sub-organization group), and it sets the basis for defining access privileges in the context of a network and channel (e.g., channel admins, readers, writers).
The configuration of an MSP is advertised to all the channels where members of the corresponding organization
participate (in the form of a channel MSP). In addition to the channel MSP, peers, orderers, and clients also maintain
a local MSP to authenticate member messages outside the context of a channel and to define the permissions over a
particular component (who has the ability to install chaincode on a peer, for example).
In addition, an MSP can allow for the identification of a list of identities that have been revoked — as discussed in the Identity documentation — and we will see how that revocation process also extends to MSPs.
We’ll talk more about local and channel MSPs in a moment. For now let’s see what MSPs do in general.
An organization is a managed group of members. This can be something as big as a multinational corporation or as small as a flower shop. What’s most important about organizations (or orgs) is that they manage their members under
a single MSP. Note that this is different from the organization concept defined in an X.509 certificate, which we’ll talk
about later.
The exclusive relationship between an organization and its MSP makes it sensible to name the MSP after the organi-
zation, a convention you’ll find adopted in most policy configurations. For example, organization ORG1 would likely
have an MSP called something like ORG1-MSP. In some cases an organization may require multiple membership
groups — for example, where channels are used to perform very different business functions between organizations.
In these cases it makes sense to have multiple MSPs and name them accordingly, e.g., ORG2-MSP-NATIONAL and
ORG2-MSP-GOVERNMENT, reflecting the different membership roots of trust within ORG2 in the NATIONAL sales
channel compared to the GOVERNMENT regulatory channel.
Two different MSP configurations for an organization. The first configuration shows the typical relationship be-
tween an MSP and an organization — a single MSP defines the list of members of an organization. In the second
configuration, different MSPs are used to represent different organizational groups with national, international, and
governmental affiliation.
An organization is often divided up into multiple organizational units (OUs), each of which has a certain
set of responsibilities. For example, the ORG1 organization might have both ORG1-MANUFACTURING and
ORG1-DISTRIBUTION OUs to reflect these separate lines of business. When a CA issues X.509 certificates, the OU
field in the certificate specifies the line of business to which the identity belongs.
We’ll see later how OUs can be used to control which parts of an organization are considered members of a blockchain network. For example, only identities from the ORG1-MANUFACTURING OU might be able to access a channel, whereas identities from ORG1-DISTRIBUTION cannot.
Finally, though this is a slight misuse of OUs, they can sometimes be used by different organizations in a consortium
to distinguish each other. In such cases, the different organizations use the same Root CAs and Intermediate CAs for
their chain of trust, but assign the OU field to identify members of each organization. We’ll also see how to configure
MSPs to achieve this later.
MSPs appear in two places in a blockchain network: in channel configuration (channel MSPs), and locally on an actor’s premises (local MSP). Local MSPs are defined for clients (users) and for nodes (peers and orderers). Node local MSPs define the permissions for that node (who the peer admins are, for example). The local MSPs of the users allow the user side to authenticate itself in its transactions as a member of a channel (e.g. in chaincode transactions), or as the owner of a specific role in the system (an org admin, for example, in configuration transactions).
Every node and user must have a local MSP defined, as it defines who has administrative or participatory rights at
that level (peer admins will not necessarily be channel admins, and vice versa).
In contrast, channel MSPs define administrative and participatory rights at the channel level. Every organization
participating in a channel must have an MSP defined for it. Peers and orderers on a channel will all share the same
view of channel MSPs, and will therefore be able to correctly authenticate the channel participants. This means that
if an organization wishes to join the channel, an MSP incorporating the chain of trust for the organization’s members
would need to be included in the channel configuration. Otherwise transactions originating from this organization’s
identities will be rejected.
The key difference here between local and channel MSPs is not how they function — both turn identities into roles —
but their scope.
Local and channel MSPs. The trust domain (e.g., the organization) of each peer is defined by the peer’s local MSP,
e.g., ORG1 or ORG2. Representation of an organization on a channel is achieved by adding the organization’s MSP
to the channel configuration. For example, the channel of this figure is managed by both ORG1 and ORG2. Similar
principles apply for the network, orderers, and users, but these are not shown here for simplicity.
You may find it helpful to see how local and channel MSPs are used by seeing what happens when a blockchain
administrator installs and instantiates a smart contract, as shown in the diagram above.
An administrator B connects to the peer with an identity issued by RCA1 and stored in their local MSP. When B tries to
install a smart contract on the peer, the peer checks its local MSP, ORG1-MSP, to verify that the identity of B is indeed
a member of ORG1. A successful verification will allow the install command to complete. Subsequently,
B wishes to instantiate the smart contract on the channel. Because this is a channel operation, all organizations on the
channel must agree to it. Therefore, the peer must check the MSPs of the channel before it can successfully commit
this command. (Other things must happen too, but concentrate on the above for now.)
Local MSPs are only defined on the file system of the node or user to which they apply. Therefore, physically and logically there is only one local MSP per node or user. Channel MSPs, by contrast, are available to all nodes in the channel and are logically defined once, in the channel configuration. However, a channel MSP is also instantiated on the file system of every node in the channel and kept synchronized via consensus. So while there is a copy of each channel MSP on the local file system of every node, logically a channel MSP resides on and is maintained by the channel or the network.
The split between channel and local MSPs reflects the needs of organizations to administer their local resources, such as peer or orderer nodes, and their channel resources, such as ledgers, smart contracts, and consortia, which operate
at the channel or network level. It’s helpful to think of these MSPs as being at different levels, with MSPs at a
higher level relating to network administration concerns while MSPs at a lower level handle identity for the
administration of private resources. MSPs are mandatory at every level of administration — they must be defined
for the network, channel, peer, orderer, and users.
MSP Levels. The MSPs for the peer and orderer are local, whereas the MSPs for a channel (including the network
configuration channel) are shared across all participants of that channel. In this figure, the network configuration
channel is administered by ORG1, but another application channel can be managed by ORG1 and ORG2. The peer
is a member of and managed by ORG2, whereas ORG1 manages the orderer of the figure. ORG1 trusts identities
from RCA1, whereas ORG2 trusts identities from RCA2. Note that these are administration identities, reflecting who
can administer these components. So while ORG1 administers the network, ORG2.MSP does exist in the network
definition.
• Network MSP: The configuration of a network defines who the members in the network are — by defining the MSPs of the participant organizations — as well as which of these members are authorized to perform administrative tasks (e.g., creating a channel).
• Channel MSP: It is important for a channel to maintain the MSPs of its members separately. A channel provides
private communications between a particular set of organizations which in turn have administrative control over
it. Channel policies interpreted in the context of that channel’s MSPs define who has the ability to participate in certain actions on the channel, e.g., adding organizations, or instantiating chaincodes. Note that there is no
necessary relationship between the permission to administrate a channel and the ability to administrate the
network configuration channel (or any other channel). Administrative rights exist within the scope of what is
being administrated (unless the rules have been written otherwise — see the discussion of the ROLE attribute
below).
• Peer MSP: This local MSP is defined on the file system of each peer and there is a single MSP instance for
each peer. Conceptually, it performs exactly the same function as channel MSPs with the restriction that it only
applies to the peer where it is defined. An example of an action whose authorization is evaluated using the peer’s
local MSP is the installation of a chaincode on the peer.
• Orderer MSP: Like a peer MSP, an orderer local MSP is also defined on the file system of the node and only
applies to that node. Like peer nodes, orderers are also owned by a single organization and therefore have a single MSP to list the actors or nodes they trust.
So far, you’ve seen that the most important elements of an MSP are the specifications of the Root or Intermediate CAs that are used to establish an actor’s or node’s membership in the respective organization. There are, however, more elements that are used in conjunction with these two to assist with membership functions.
The figure above shows how a local MSP is stored on a local filesystem. Even though channel MSPs are not physically
structured in exactly this way, it’s still a helpful way to think about them.
As you can see, there are nine elements to an MSP. It’s easiest to think of these elements in a directory structure, where
the MSP name is the root folder name with each subfolder representing different elements of an MSP configuration.
Let’s describe these folders in a little more detail and see why they are important.
• Root CAs: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by the organiza-
tion represented by this MSP. There must be at least one Root CA X.509 certificate in this MSP folder.
This is the most important folder because it identifies the CAs from which all other certificates must be derived
to be considered members of the corresponding organization.
• Intermediate CAs: This folder contains a list of X.509 certificates of the Intermediate CAs trusted by this
organization. Each certificate must be signed by one of the Root CAs in the MSP or by an Intermediate CA
whose issuing CA chain ultimately leads back to a trusted Root CA.
An intermediate CA may represent a different subdivision of the organization (like ORG1-MANUFACTURING and ORG1-DISTRIBUTION do for ORG1), or the organization itself (as may be the case if a commercial CA is leveraged for the organization’s identity management). In the latter case, intermediate CAs can be used to represent organization subdivisions. You can find more information on best practices for MSP configuration here. Notice that it is possible to have a functioning network that does not have an Intermediate CA, in which case this folder would be empty.
Like the Root CA folder, this folder defines the CAs from which certificates must be issued to be considered
members of the organization.
• Organizational Units (OUs): These are listed in the $FABRIC_CFG_PATH/msp/config.yaml file and contain a list of organizational units whose members are considered to be part of the organization represented by this MSP. This is particularly useful when you want to restrict the members of an organization to the ones holding an identity (signed by one of the MSP’s designated CAs) with a specific OU in it (a sketch of such an OU check appears after this list).
Specifying OUs is optional. If no OUs are listed, all the identities that are part of an MSP — as identified by the Root CA and Intermediate CA folders — will be considered members of the organization.
• Administrators: This folder contains a list of identities that define the actors who have the role of administrators
for this organization. For the standard MSP type, there should be one or more X.509 certificates in this list.
It’s worth noting that just because an actor has the role of an administrator it doesn’t mean that they can ad-
minister particular resources! The actual power a given identity has with respect to administering the system
is determined by the policies that manage system resources. For example, a channel policy might specify that
ORG1-MANUFACTURING administrators have the rights to add new organizations to the channel, whereas the
ORG1-DISTRIBUTION administrators have no such rights.
Even though an X.509 certificate has a ROLE attribute (specifying, for example, that an actor is an admin),
this refers to an actor’s role within its organization rather than on the blockchain network. This is similar to the
purpose of the OU attribute, which — if it has been defined — refers to an actor’s place in the organization.
The ROLE attribute can be used to confer administrative rights at the channel level if the policy for that channel
has been written to allow any administrator from an organization (or certain organizations) permission to per-
form certain channel functions (such as instantiating chaincode). In this way, an organizational role can confer
a network role.
• Revoked Certificates: If the identity of an actor has been revoked, identifying information about the identity — not the identity itself — is held in this folder. For X.509-based identities, these identifiers are pairs of strings known as the Subject Key Identifier (SKI) and Authority Key Identifier (AKI), and they are checked whenever the X.509 certificate is being used, to make sure the certificate has not been revoked.
This list is conceptually the same as a CA’s Certificate Revocation List (CRL), but it also relates to revocation of membership from the organization. As a result, the administrator of an MSP, local or channel, can quickly revoke an actor or node from an organization by advertising the updated CRL of the CA that issued the revoked certificate. This “list of lists” is optional. It will only become populated as certificates are revoked.
• Node Identity: This folder contains the identity of the node, i.e., cryptographic material that — in combination with the content of KeyStore — allows the node to authenticate itself in the messages that it sends to other participants of its channels and network. For X.509-based identities, this folder contains an X.509 certificate.
This is the certificate a peer places in a transaction proposal response, for example, to indicate that the peer
has endorsed it — which can subsequently be checked against the resulting transaction’s endorsement policy at
validation time.
This folder is mandatory for local MSPs, and there must be exactly one X.509 certificate for the node. It is not
used for channel MSPs.
• KeyStore for Private Key: This folder is defined for the local MSP of a peer or orderer node (or in a client’s local MSP), and contains the node’s signing key. This key cryptographically matches the node’s identity included in the Node Identity folder and is used to sign data — for example to sign a transaction proposal response, as part of the endorsement phase.
This folder is mandatory for local MSPs, and must contain exactly one private key. Obviously, access to this folder must be limited to the identities of users who have administrative responsibility on the peer.
The configuration of a channel MSP does not include this folder, as channel MSPs solely aim to offer identity validation functionality, not signing abilities.
• TLS Root CA: This folder contains a list of self-signed X.509 certificates of the Root CAs trusted by this
organization for TLS communications. An example of a TLS communication would be when a peer needs to
connect to an orderer so that it can receive ledger updates.
MSP TLS information relates to the nodes inside the network — the peers and the orderers, in other words — rather than the applications and administrators that consume the network.
There must be at least one TLS Root CA X.509 certificate in this folder.
• TLS Intermediate CA: This folder contains a list of intermediate CA certificates trusted by the organization represented by this MSP for TLS communications. This folder is specifically useful when commercial CAs are used for TLS certificates of an organization. Similar to membership intermediate CAs, specifying intermediate TLS CAs is optional.
For more information about TLS, click here.
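To illustrate the OU restriction described in the list above, here is a minimal Go sketch (an illustration, not the actual MSP implementation; the package name is hypothetical) that gates membership on the OU field of an already chain-verified certificate:

    package msputil

    import "crypto/x509"

    // hasRequiredOU reports whether a chain-verified certificate carries one
    // of the OUs this organization accepts (conceptually, the list in
    // $FABRIC_CFG_PATH/msp/config.yaml).
    func hasRequiredOU(cert *x509.Certificate, allowed map[string]bool) bool {
        for _, ou := range cert.Subject.OrganizationalUnit {
            if allowed[ou] {
                return true
            }
        }
        return false
    }

With allowed containing only "ORG1-MANUFACTURING", an identity from ORG1-DISTRIBUTION would be rejected, mirroring the channel-access example given earlier.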
If you’ve read this doc as well as our doc on Identity, you should have a pretty good grasp of how identities and membership work in Hyperledger Fabric. You’ve seen how a PKI and MSPs are used to identify the actors collaborating in
a blockchain network. You’ve learned how certificates, public/private keys, and roots of trust work, in addition to how
MSPs are physically and logically structured.
4.7 Peers
A blockchain network is comprised primarily of a set of peer nodes (or, simply, peers). Peers are a fundamental
element of the network because they host ledgers and smart contracts. Recall that a ledger immutably records all
the transactions generated by smart contracts (or chaincode). Smart contracts and ledgers are used to encapsulate the
shared processes and shared information in a network, respectively. These aspects of a peer make them a good starting
point to understand a Hyperledger Fabric network.
Other elements of the blockchain network are of course important: ledgers and smart contracts, orderers, policies,
channels, applications, organizations, identities, and membership, and you can read more about them in their own
dedicated sections. This section focuses on peers, and their relationship to those other elements in a Hyperledger Fabric network.
A blockchain network is comprised of peer nodes, each of which can hold copies of ledgers and copies of smart
contracts. In this example, the network N consists of peers P1, P2 and P3, each of which maintains its own instance
of the distributed ledger L1. P1, P2 and P3 use the same chaincode, S1, to access their copy of that distributed ledger.
Peers can be created, started, stopped, reconfigured, and even deleted. They expose a set of APIs that enable admin-
istrators and applications to interact with the services that they provide. We’ll learn more about these services in this
section.
Hyperledger Fabric implements smart contracts with a technology concept it calls chaincode — simply a piece of
code that accesses the ledger, written in one of the supported programming languages. In this topic, we’ll usually use
the term chaincode, but feel free to read it as smart contract if you’re more used to that term. It’s the same thing!
Let’s look at a peer in a little more detail. We can see that it’s the peer that hosts both the ledger and chaincode.
More accurately, the peer actually hosts instances of the ledger, and instances of chaincode. Note that this provides a
deliberate redundancy in a Fabric network — it avoids single points of failure. We’ll learn more about the distributed
and decentralized nature of a blockchain network later in this section.
A peer hosts instances of ledgers and instances of chaincodes. In this example, P1 hosts an instance of ledger L1 and
an instance of chaincode S1. There can be many ledgers and chaincodes hosted on an individual peer.
Because a peer is a host for ledgers and chaincodes, applications and administrators must interact with a peer if they
want to access these resources. That’s why peers are considered the most fundamental building blocks of a Hyperledger
Fabric network. When a peer is first created, it has neither ledgers nor chaincodes. We’ll see later how ledgers get
created, and how chaincodes get installed, on peers.
Multiple Ledgers
A peer is able to host more than one ledger, which is helpful because it allows for a flexible system design. The
simplest configuration is for a peer to manage a single ledger, but it’s absolutely appropriate for a peer to host two or
more ledgers when required.
A peer hosting multiple ledgers. Peers host one or more ledgers, and each ledger has zero or more chaincodes that apply to it. In this example, we can see that the peer P1 hosts ledgers L1 and L2. Ledger L1 is accessed using chaincode S1. Ledger L2, on the other hand, can be accessed using chaincodes S1 and S2.
Although it is perfectly possible for a peer to host a ledger instance without hosting any chaincodes which access that
ledger, it’s rare that peers are configured this way. The vast majority of peers will have at least one chaincode installed on them which can query or update the peer’s ledger instances. It’s worth mentioning in passing that, whether or not users
have installed chaincodes for use by external applications, peers also have special system chaincodes that are always
present. These are not discussed in detail in this topic.
Multiple Chaincodes
There isn’t a fixed relationship between the number of ledgers a peer has and the number of chaincodes that can access
that ledger. A peer might have many chaincodes and many ledgers available to it.
An example of a peer hosting multiple chaincodes. Each ledger can have many chaincodes which access it. In this
example, we can see that peer P1 hosts ledgers L1 and L2, where L1 is accessed by chaincodes S1 and S2, and L2 is
accessed by S1 and S3. We can see that S1 can access both L1 and L2.
We’ll see a little later why the concept of channels in Hyperledger Fabric is important when hosting multiple ledgers
or multiple chaincodes on a peer.
We’re now going to show how applications interact with peers to access the ledger. Ledger-query interactions involve
a simple three-step dialogue between an application and a peer; ledger-update interactions are a little more involved,
and require two extra steps. We’ve simplified these steps a little to help you get started with Hyperledger Fabric, but
don’t worry — what’s most important to understand is the difference in application-peer interactions for ledger-query
compared to ledger-update transaction styles.
Applications always connect to peers when they need to access ledgers and chaincodes. The Hyperledger Fabric
Software Development Kit (SDK) makes this easy for programmers — its APIs enable applications to connect to peers,
invoke chaincodes to generate transactions, submit transactions to the network that will get ordered and committed to
the distributed ledger, and receive events when this process is complete.
Through a peer connection, applications can execute chaincodes to query or update a ledger. The result of a ledger
query transaction is returned immediately, whereas ledger updates involve a more complex interaction between appli-
cations, peers and orderers. Let’s investigate this in a little more detail.
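To make the query/update distinction concrete in application code, here is a hedged sketch using the channel client from fabric-sdk-go; the chaincode ID, function names, and arguments are hypothetical, and the client setup is elided.

    package app

    import "github.com/hyperledger/fabric-sdk-go/pkg/client/channel"

    func queryAndUpdate(client *channel.Client) error {
        // Ledger query: the peer answers immediately from its local copy of
        // the ledger; the result comes straight back in the response payload.
        resp, err := client.Query(channel.Request{
            ChaincodeID: "mycc", // hypothetical chaincode
            Fcn:         "readAsset",
            Args:        [][]byte{[]byte("ASSET1")},
        })
        if err != nil {
            return err
        }
        _ = resp.Payload

        // Ledger update: Execute gathers endorsements and submits the
        // transaction for ordering; the commit is signalled asynchronously.
        _, err = client.Execute(channel.Request{
            ChaincodeID: "mycc",
            Fcn:         "updateAsset",
            Args:        [][]byte{[]byte("ASSET1"), []byte("newValue")},
        })
        return err
    }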
Peers, in conjunction with orderers, ensure that the ledger is kept up-to-date on every peer. In this example, application
A connects to P1 and invokes chaincode S1 to query or update the ledger L1. P1 invokes S1 to generate a proposal
response that contains a query result or a proposed ledger update. Application A receives the proposal response and,
for queries, the process is now complete. For updates, A builds a transaction from all of the responses, which it sends to O1 for ordering. O1 collects transactions from across the network into blocks, and distributes these to all peers, including P1. P1 validates the transaction before applying it to L1. Once L1 is updated, P1 generates an event, received
by A, to signify completion.
A peer can return the results of a query to an application immediately since all of the information required to satisfy the
query is in the peer’s local copy of the ledger. Peers never consult with other peers in order to respond to a query from
an application. Applications can, however, connect to one or more peers to issue a query; for example, to corroborate
a result between multiple peers, or retrieve a more up-to-date result from a different peer if there’s a suspicion that
information might be out of date. In the diagram, you can see that ledger query is a simple three-step process.
An update transaction starts in the same way as a query transaction, but has two extra steps. Although ledger-updating
applications also connect to peers to invoke a chaincode, unlike with ledger-querying applications, an individual peer
cannot perform a ledger update at this time, because other peers must first agree to the change — a process called
consensus. Therefore, peers return to the application a proposed update — one that this peer would apply subject to
other peers’ prior agreement. The first extra step — step four — requires that applications send an appropriate set of
matching proposed updates to the entire network of peers as a transaction for commitment to their respective ledgers.
This is achieved by the application using an orderer to package transactions into blocks, and distribute them to the
entire network of peers, where they can be verified before being applied to each peer’s local copy of the ledger. As
this whole ordering process takes some time to complete (seconds), the application is notified asynchronously, as
shown in step five.
Later in this section, you’ll learn more about the detailed nature of this ordering process — and for a really detailed
look at this process see the Transaction Flow topic.
Although this section is about peers rather than channels, it’s worth spending a little time understanding how peers
interact with each other, and with applications, via channels — a mechanism by which a set of components within a
blockchain network can communicate and transact privately.
These components are typically peer nodes, orderer nodes and applications and, by joining a channel, they agree to
collaborate to collectively share and manage identical copies of the ledger associated with that channel. Conceptually,
you can think of channels as being similar to groups of friends (though the members of a channel certainly don’t need
to be friends!). A person might have several groups of friends, with each group having activities they do together.
These groups might be totally separate (a group of work friends as compared to a group of hobby friends), or there
can be some crossover between them. Nevertheless, each group is its own entity, with “rules” of a kind.
Channels allow a specific set of peers and applications to communicate with each other within a blockchain network.
In this example, application A can communicate directly with peers P1 and P2 using channel C. You can think of the
channel as a pathway for communications between particular applications and peers. (For simplicity, orderers are
not shown in this diagram, but must be present in a functioning network.)
We see that channels don’t exist in the same way that peers do — it’s more appropriate to think of a channel as a
logical structure that is formed by a collection of physical peers. It is vital to understand this point — peers provide
the control point for access to, and management of, channels.
Now that you understand peers and their relationship to ledgers, chaincodes and channels, you’ll be able to see how
multiple organizations come together to form a blockchain network.
Blockchain networks are administered by a collection of organizations rather than a single organization. Peers are
central to how this kind of distributed network is built because they are owned by — and are the connection points to
the network for — these organizations.
Peers in a blockchain network with multiple organizations. The blockchain network is built up from the peers owned
and contributed by the different organizations. In this example, we see four organizations contributing eight peers to
form a network. The channel C connects five of these peers in the network N — P1, P3, P5, P7 and P8. The other
peers owned by these organizations have not been joined to this channel, but are typically joined to at least one other
channel. Applications that have been developed by a particular organization will connect to their own organization’s
peers as well as those of different organizations. Again, for simplicity, an orderer node is not shown in this diagram.
It’s really important that you can see what’s happening in the formation of a blockchain network. The network is
both formed and managed by the multiple organizations who contribute resources to it. Peers are the resources that
we’re discussing in this topic, but the resources an organization provides are more than just peers. There’s a principle
at work here — the network literally does not exist without organizations contributing their individual resources
to the collective network. Moreover, the network grows and shrinks with the resources that are provided by these
collaborating organizations.
You can see that (other than the ordering service) there are no centralized resources — in the example above, the
network, N, would not exist if the organizations did not contribute their peers. This reflects the fact that the network
does not exist in any meaningful sense unless and until organizations contribute the resources that form it. Moreover,
the network does not depend on any individual organization — it will continue to exist as long as one organization
remains, no matter which other organizations may come and go. This is at the heart of what it means for a network to
be decentralized.
Applications in different organizations, as in the example above, may or may not be the same. That’s because it’s
entirely up to an organization as to how its applications process their peers’ copies of the ledger. This means that both
application and presentation logic may vary from organization to organization even though their respective peers host
exactly the same ledger data.
Applications connect either to peers in their organization, or peers in another organization, depending on the nature of
the ledger interaction that’s required. For ledger-query interactions, applications typically connect to their own orga-
nization’s peers. For ledger-update interactions, we’ll see later why applications need to connect to peers representing
every organization that is required to endorse the ledger update.
Now that you’ve seen how peers from different organizations come together to form a blockchain network, it’s worth
spending a few moments understanding how peers get assigned to organizations by their administrators.
Peers have an identity assigned to them via a digital certificate from a particular certificate authority. You can read
lots more about how X.509 digital certificates work elsewhere in this guide but, for now, think of a digital certificate
as being like an ID card that provides lots of verifiable information about a peer. Each and every peer in the network
is assigned a digital certificate by an administrator from its owning organization.
When a peer connects to a channel, its digital certificate identifies its owning organization via a channel MSP. In this
example, P1 and P2 have identities issued by CA1. Channel C determines from a policy in its channel configuration
that identities from CA1 should be associated with Org1 using ORG1.MSP. Similarly, P3 and P4 are identified by
ORG2.MSP as being part of Org2.
Whenever a peer connects using a channel to a blockchain network, a policy in the channel configuration uses the
peer’s identity to determine its rights. The mapping of identity to organization is provided by a component called
a Membership Service Provider (MSP) — it determines how a peer gets assigned to a specific role in a particular
organization and accordingly gains appropriate access to blockchain resources. Moreover, a peer can be owned only
by a single organization, and is therefore associated with a single MSP. We’ll learn more about peer access control
later in this section, and there’s an entire section on MSPs and access control policies elsewhere in this guide. But
for now, think of an MSP as providing linkage between an individual identity and a particular organizational role in a
blockchain network.
To digress for a moment, peers as well as everything that interacts with a blockchain network acquire their organi-
zational identity from their digital certificate and an MSP. Peers, applications, end users, administrators and orderers
must have an identity and an associated MSP if they want to interact with a blockchain network. We give a name
to every entity that interacts with a blockchain network using an identity — a principal. You can learn lots more
about principals and organizations elsewhere in this guide, but for now you know more than enough to continue your
understanding of peers!
Finally, note that it’s not really important where the peer is physically located — it could reside in the cloud, in a data center owned by one of the organizations, or on a local machine — it’s the identity associated with it that identifies it as being owned by a particular organization. In our example above, P3 could be hosted in Org1’s data center, but as long as the digital certificate associated with it is issued by CA2, then it’s owned by Org2.
We’ve seen that peers form the basis for a blockchain network, hosting ledgers and chaincode which can be queried
and updated by peer-connected applications. However, the mechanism by which applications and peers interact with
each other to ensure that every peer’s ledger is kept consistent is mediated by special nodes called orderers, and it’s to
these nodes we now turn our attention.
An update transaction is quite different from a query transaction because a single peer cannot, on its own, update the ledger: it needs the approval of other peers in the network before a change can be applied to its local ledger. This process is called consensus, which takes
much longer to complete than a simple query. But when all the peers required to approve the transaction do so, and the
transaction is committed to the ledger, peers will notify their connected applications that the ledger has been updated.
You’re about to be shown a lot more detail about how peers and orderers manage the consensus process in this section.
Specifically, applications that want to update the ledger are involved in a 3-phase process, which ensures that all the
peers in a blockchain network keep their ledgers consistent with each other. In the first phase, applications work with a subset of endorsing peers, each of which provides an endorsement of the proposed ledger update to the application, but does not apply the proposed update to its copy of the ledger. In the second phase, these separate endorsements
are collected together as transactions and packaged into blocks. In the final phase, these blocks are distributed back to
every peer where each transaction is validated before being applied to that peer’s copy of the ledger.
As you will see, orderer nodes are central to this process, so let’s investigate in a little more detail how applications
and peers use orderers to generate ledger updates that can be consistently applied to a distributed, replicated ledger.
Phase 1: Proposal
Phase 1 of the transaction workflow involves an interaction between an application and a set of peers — it does not
involve orderers. Phase 1 is only concerned with an application asking different organizations’ endorsing peers to
agree to the results of the proposed chaincode invocation.
To start phase 1, applications generate a transaction proposal which they send to each of the required set of peers for
endorsement. Each of these endorsing peers then independently executes a chaincode using the transaction proposal
to generate a transaction proposal response. It does not apply this update to the ledger, but rather simply signs it and
returns it to the application. Once the application has received a sufficient number of signed proposal responses, the
first phase of the transaction flow is complete. Let’s examine this phase in a little more detail.
Transaction proposals are independently executed by peers who return endorsed proposal responses. In this example,
application A1 generates transaction T1 proposal P which it sends to both peer P1 and peer P2 on channel C. P1
executes S1 using transaction T1 proposal P generating transaction T1 response R1 which it endorses with E1. Inde-
pendently, P2 executes S1 using transaction T1 proposal P generating transaction T1 response R2 which it endorses
with E2. Application A1 receives two endorsed responses for transaction T1, namely E1 and E2.
Initially, a set of peers are chosen by the application to generate a set of proposed ledger updates. Which peers are
chosen by the application? Well, that depends on the endorsement policy (defined for a chaincode), which defines the
set of organizations that need to endorse a proposed ledger change before it can be accepted by the network. This
is literally what it means to achieve consensus — every organization that matters must have endorsed the proposed ledger change before it will be accepted onto any peer’s ledger.
A peer endorses a proposal response by signing the entire payload using its private key and attaching the resulting digital signature. This endorsement can be subsequently used to prove that this organization’s peer generated a particular response.
In our example, if peer P1 is owned by organization Org1, endorsement E1 corresponds to a digital proof that “Trans-
action T1 response R1 on ledger L1 has been provided by Org1’s peer P1!”.
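Mechanically, an endorsement is just such a signature over the response payload. A minimal Go sketch of the sign-and-verify step (plain ECDSA from the standard library, not Fabric’s endorsement code):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // The peer's signing key; in Fabric this lives in the local MSP keystore.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

        payload := []byte("transaction T1 response R1") // the proposal response
        digest := sha256.Sum256(payload)

        // Endorse: sign a digest of the entire payload with the private key.
        endorsement, _ := ecdsa.SignASN1(rand.Reader, key, digest[:])

        // Anyone holding the peer's certificate (public key) can verify that
        // this organization's peer produced exactly this response.
        ok := ecdsa.VerifyASN1(&key.PublicKey, digest[:], endorsement)
        fmt.Println("endorsement valid:", ok)
    }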
Phase 1 ends when the application receives signed proposal responses from sufficient peers. We note that different
peers can return different and therefore inconsistent transaction responses to the application for the same transaction
proposal. It might simply be that the result was generated at different times on different peers with ledgers at different
states, in which case an application can simply request a more up-to-date proposal response. Less likely, but much
more seriously, results might be different because the chaincode is non-deterministic. Non-determinism is the enemy
of chaincodes and ledgers and if it occurs it indicates a serious problem with the proposed transaction, as inconsis-
tent results cannot, obviously, be applied to ledgers. An individual peer cannot know that its transaction result is
non-deterministic — transaction responses must be gathered together for comparison before non-determinism can be
detected. (Strictly speaking, even this is not enough, but we defer this discussion to the transaction section, where
non-determinism is discussed in detail.)
At the end of phase 1, the application is free to discard inconsistent transaction responses if it wishes to do so,
effectively terminating the transaction workflow early. We’ll see later that if an application tries to use an inconsistent
set of transaction responses to update the ledger, it will be rejected.
Phase 2: Packaging
The second phase of the transaction workflow is the packaging phase. The orderer is pivotal to this process —
it receives transactions containing endorsed transaction proposal responses from many applications. It orders each
transaction relative to other transactions, and packages batches of transactions into blocks ready for distribution back
to all peers connected to the orderer, including the original endorsing peers.
The first role of an orderer node is to package proposed ledger updates. In this example, application A1 sends a
transaction T1 endorsed by E1 and E2 to the orderer O1. In parallel, Application A2 sends transaction T2 endorsed
by E1 to the orderer O1. O1 packages transaction T1 from application A1 and transaction T2 from application
A2 together with other transactions from other applications in the network into block B2. We can see that in B2, the
transaction order is T1,T2,T3,T4,T6,T5 – which may not be the order in which these transactions arrived at the orderer
node! (This example shows a very simplified orderer configuration.)
An orderer receives proposed ledger updates concurrently from many different applications in the network on a particular channel. Its job is to arrange these proposed updates into a well-defined sequence, and package them into blocks for subsequent distribution. These blocks will become the blocks of the blockchain! Once an orderer has generated a block of the desired size, or after a maximum elapsed time (both thresholds are set in the orderer’s channel configuration, as batch size and batch timeout), the block will be sent to all peers connected to it on a particular channel. We’ll see how this block is processed in phase 3.
It’s worth noting that the sequencing of transactions in a block is not necessarily the same as the order of arrival of
transactions at the orderer! Transactions can be packaged in any order into a block, and it’s this sequence that becomes
the order of execution. What’s important is that there is a strict order, rather than what that order is.
This strict ordering of transactions within blocks makes Hyperledger Fabric a little different from other blockchains
where the same transaction can be packaged into multiple different blocks. In Hyperledger Fabric, this cannot happen
— the blocks generated by a collection of orderers are said to be final because once a transaction has been written to a
block, its position in the ledger is immutably assured. Hyperledger Fabric’s finality means that a disastrous occurrence
known as a ledger fork cannot occur. Once transactions are captured in a block, history cannot be rewritten for that
transaction at a future point in time.
We can also see that, whereas peers host the ledger and chaincodes, orderers most definitely do not. Every
transaction that arrives at an orderer is mechanically packaged in a block — the orderer makes no judgement as to the
value of a transaction, it simply packages it. That’s an important property of Hyperledger Fabric — all transactions
are marshalled into a strict order — transactions are never dropped or de-prioritized.
At the end of phase 2, we see that orderers have been responsible for the simple but vital processes of collecting proposed transaction updates, ordering them, and packaging them into blocks ready for distribution.
Phase 3: Validation
The final phase of the transaction workflow involves the distribution and subsequent validation of blocks from the
orderer to the peers, where they can be applied to the ledger. Specifically, at each peer, every transaction within a
block is validated to ensure that it has been consistently endorsed by all relevant organizations before it is applied to
the ledger. Failed transactions are retained for audit, but are not applied to the ledger.
The second role of an orderer node is to distribute blocks to peers. In this example, orderer O1 distributes block B2 to
peer P1 and peer P2. Peer P1 processes block B2, resulting in a new block being added to ledger L1 on P1. In parallel,
peer P2 processes block B2, resulting in a new block being added to ledger L1 on P2. Once this process is complete,
the ledger L1 has been consistently updated on peers P1 and P2, and each may inform connected applications that the
transaction has been processed.
Phase 3 begins with the orderer distributing blocks to all peers connected to it. Peers are connected to orderers on
channels such that when a new block is generated, all of the peers connected to the orderer will be sent a copy of the
new block. Each peer will process this block independently, but in exactly the same way as every other peer on the
channel. In this way, we’ll see that the ledger can be kept consistent. It’s also worth noting that not every peer needs to
be connected to an orderer — peers can cascade blocks to other peers using the gossip protocol, and those peers can also process them independently. But let’s leave that discussion to another time!
Upon receipt of a block, a peer will process each transaction in the sequence in which it appears in the block. For
every transaction, each peer will verify that the transaction has been endorsed by the required organizations according
to the endorsement policy of the chaincode which generated the transaction. For example, some transactions may
only need to be endorsed by a single organization, whereas others may require multiple endorsements before they are
considered valid. This process of validation verifies that all relevant organizations have generated the same outcome
or result. Also note that this validation is different from the endorsement check in phase 1: there, it is the application that receives the responses from endorsing peers and decides whether to send the proposed transactions. If the application violates the endorsement policy by sending invalid transactions, the peer is still able to reject them in the validation process of phase 3.
If a transaction has been endorsed correctly, the peer will attempt to apply it to the ledger. To do this, a peer must
perform a ledger consistency check to verify that the current state of the ledger is compatible with the state of the
ledger when the proposed update was generated. This may not always be possible, even when the transaction has
been fully endorsed. For example, another transaction may have updated the same asset in the ledger such that the
transaction update is no longer valid and therefore can no longer be applied. In this way each peer’s copy of the ledger
is kept consistent across the network because they each follow the same rules for validation.
After a peer has successfully validated each individual transaction, it updates the ledger. Failed transactions are not
applied to the ledger, but they are retained for audit purposes, as are successful transactions. This means that peer
blocks are almost exactly the same as the blocks received from the orderer, except for a valid or invalid indicator on
each transaction in the block.
We also note that phase 3 does not require the running of chaincodes — this is done only during phase 1, and that’s im-
portant. It means that chaincodes only have to be available on endorsing nodes, rather than throughout the blockchain
network. This is often helpful as it keeps the logic of the chaincode confidential to endorsing organizations. This is
in contrast to the output of the chaincodes (the transaction proposal responses) which are shared with every peer in
the channel, whether or not they endorsed the transaction. This specialization of endorsing peers is designed to help
scalability.
Finally, every time a block is committed to a peer’s ledger, that peer generates an appropriate event. Block events
include the full block content, while block transaction events include summary information only, such as whether each
transaction in the block has been validated or invalidated. Chaincode events produced by the chaincode execution
can also be published at this time. Applications can register for these event types so that they can be notified when
they occur. These notifications conclude the third and final phase of the transaction workflow.
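Chaincode events are set from inside chaincode itself. Here is a hedged sketch using the Go chaincode shim (the import path matches Fabric 2.x-era Go chaincode; the function and event names are hypothetical):

    package chaincode

    import "github.com/hyperledger/fabric-chaincode-go/shim"

    // transferAsset updates the ledger and publishes a chaincode event. The
    // event is delivered to registered listeners only after the block
    // containing this transaction has been committed.
    func transferAsset(stub shim.ChaincodeStubInterface) error {
        // ... ledger updates via stub.PutState(...) would go here ...
        payload := []byte(`{"asset":"ASSET1","newOwner":"Org2"}`)
        return stub.SetEvent("AssetTransferred", payload)
    }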
In summary, phase 3 sees the blocks which are generated by the orderer consistently applied to the ledger. The strict
ordering of transactions into blocks allows each peer to validate that transaction updates are consistently applied across
the blockchain network.
This entire transaction workflow process is called consensus because all peers have reached agreement on the order and
content of transactions, in a process that is mediated by orderers. Consensus is a multi-step process and applications
are only notified of ledger updates when the process is complete — which may happen at slightly different times on
different peers.
We will discuss orderers in a lot more detail in a future orderer topic, but for now, think of orderers as nodes which
collect and distribute proposed ledger updates from applications for peers to validate and include on the ledger.
That’s it! We’ve now finished our tour of peers and the other components that they relate to in Hyperledger Fabric.
We’ve seen that peers are in many ways the most fundamental element — they form the network, host chaincodes
and the ledger, handle transaction proposals and responses, and keep the ledger up-to-date by consistently applying
transaction updates to it.
4.8 Private Data
In cases where a group of organizations on a channel need to keep data private from other organizations on that channel, they have the option to create a new channel comprising just the organizations who need access to the data.
However, creating separate channels in each of these cases creates additional administrative overhead (maintaining
chaincode versions, policies, MSPs, etc), and doesn’t allow for use cases in which you want all channel participants to
see a transaction while keeping a portion of the data private.
That’s why, starting in v1.2, Fabric offers the ability to create private data collections, which allow a defined subset
of organizations on a channel the ability to endorse, commit, or query private data without having to create a separate
channel.
Collection members may decide to share the private data with other parties if they get into a dispute or if they want to
transfer the asset to a third party. The third party can then compute the hash of the private data and see if it matches
the state on the channel ledger, proving that the state existed between the collection members at a certain point in time.
• Use channels when entire transactions (and ledgers) must be kept confidential within a set of organizations that
are members of the channel.
• Use collections when transactions (and ledgers) must be shared among a set of organizations, but when only
a subset of those organizations should have access to some (or all) of the data within a transaction. Addition-
ally, since private data is disseminated peer-to-peer rather than via blocks, use private data collections when
transaction data must be kept confidential from ordering service nodes.
Using this example, peers owned by the Distributor will have multiple private databases inside their ledger, which include the private data from the Distributor, Farmer and Shipper relationship and the Distributor and Wholesaler relationship. Because these databases are kept separate from the database that holds the channel ledger, private data is sometimes referred to as “SideDB”.
When private data collections are referenced in chaincode, the transaction flow is slightly different in order to protect
the confidentiality of the private data as transactions are proposed, endorsed, and committed to the ledger.
For details on transaction flows that don’t use private data refer to our documentation on transaction flow.
1. The client application submits a proposal request to invoke a chaincode function (reading or writing private data) to endorsing peers which are part of the authorized organizations of the collection. The private data, or data used to generate private data in chaincode, is sent in a transient field of the proposal (see the chaincode sketch after this list).
2. The endorsing peers simulate the transaction and store the private data in a transient data store (temporary storage local to the peer). They distribute the private data, based on the collection policy, to authorized peers via gossip.
3. The endorsing peer sends the proposal response back to the client with public data, including a hash of the
private data key and value. No private data is sent back to the client. For more information on how endorsement
works with private data, click here.
4. The client application submits the transaction to the ordering service (with hashes of the private data), and it is distributed into blocks as normal. The block with the hashed values is distributed to all the peers. In this way, all peers on the channel can validate transactions with the hashes of the private data in a consistent way, without knowing the actual private data.
5. At block-committal time, authorized peers use the collection policy to determine if they are authorized to have
access to the private data. If they do, they will first check their local transient data store to determine
if they have already received the private data at chaincode endorsement time. If not, they will attempt to pull
the private data from another peer. Then they will validate the private data against the hashes in the public block
and commit the transaction and the block. Upon validation/commit, the private data is moved to their copy of
the private state database and private writeset storage. The private data is then deleted from the transient
data store.
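From the chaincode side, steps 1 and 2 look roughly like the following hedged Go sketch: the private value arrives in the transient map and is written to a collection rather than to public channel state (the collection, key, and package names are hypothetical):

    package privatedata

    import (
        "fmt"

        "github.com/hyperledger/fabric-chaincode-go/shim"
    )

    // setPrivatePrice reads a value from the proposal's transient field and
    // stores it in a private data collection, never in public channel state.
    func setPrivatePrice(stub shim.ChaincodeStubInterface) error {
        transient, err := stub.GetTransient()
        if err != nil {
            return err
        }
        price, ok := transient["price"] // supplied by the client application
        if !ok {
            return fmt.Errorf("no price in transient map")
        }
        // Only peers authorized by the collection's policy receive this value
        // via gossip; the ordering service sees only its hash.
        return stub.PutPrivateData("collectionFarmerShipper", "ASSET1", price)
    }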
For more details on collection definitions, and other low level information about private data and collections, refer to
the private data reference topic.
For very sensitive data, even the parties sharing the private data might want — or might be required by government
regulations — to “purge” the data stored on their peers after a set amount of time, leaving behind only a hash of the
data to serve as immutable evidence of the transaction.
In some of these cases, the private data only needs to exist on the peer’s private database until it can be replicated into a database external to the blockchain network. The data might also only need to exist on the peers until a chaincode business process is done with it (trade settled, contract fulfilled, etc.). To support the latter use case, it is possible to purge private data if it has not been modified once a set number of subsequent blocks have been added to the private database; this threshold is set per collection with the blockToLive property of the collection definition.
4.9 Ledger
A ledger contains the current state of a business as a journal of transactions. The earliest European and Chinese ledgers
date from almost 1000 years ago, and the Sumerians had stone ledgers 4000 years ago – but let’s start with a more
up-to-date example!
You’re probably used to looking at your bank account. What’s most important to you is the available balance – it’s
what you’re able to spend at the current moment in time. If you want to see how your balance was derived, then you
can look through the transaction credits and debits that determined it. This is a real life example of a ledger – a state
(your bank balance), and a set of ordered transactions (credits and debits) that determine it. Hyperledger Fabric is
motivated by these same two concerns – to present the current value of a set of ledger states, and to capture the history
of the transactions that determined these states.
A ledger doesn’t literally store business objects – instead it stores facts about those objects. When we say “we store a
business object in a ledger” what we really mean is that we’re recording the facts about the current state of an object,
and the facts about the history of transactions that led to the current state. In an increasingly digital world, it can feel
like we’re looking at an object, rather than facts about an object. In the case of a digital object, it’s likely that it lives in
an external datastore; the facts we store in the ledger allow us to identify its location along with other key information
about it.
While the facts about the current state of a business object may change, the history of facts about it is immutable: it can be added to, but it cannot be retrospectively changed. We’re going to see how thinking of a blockchain as an
immutable history of facts about business objects is a simple yet powerful way to understand it.
Let’s now take a closer look at the Hyperledger Fabric ledger structure!
In Hyperledger Fabric, a ledger consists of two distinct, though related, parts – a world state and a blockchain. Each
of these represents a set of facts about a set of business objects.
Firstly, there’s a world state – a database that holds a cache of the current values of a set of ledger states. The world
state makes it easy for a program to directly access the current value of a state rather than having to calculate it by
traversing the entire transaction log. Ledger states are, by default, expressed as key-value pairs, and we’ll see later
how Hyperledger Fabric provides flexibility in this regard. The world state can change frequently, as states can be
created, updated and deleted.
Secondly, there’s a blockchain – a transaction log that records all the changes that have resulted in the current
world state. Transactions are collected inside blocks that are appended to the blockchain – enabling you to understand
the history of changes that have resulted in the current world state. The blockchain data structure is very different to
the world state because once written, it cannot be modified; it is immutable.
A Ledger L comprises blockchain B and world state W, where blockchain B determines world state W. We can also say
that world state W is derived from blockchain B.
It’s helpful to think of there being one logical ledger in a Hyperledger Fabric network. In reality, the network maintains
multiple copies of a ledger – which are kept consistent with every other copy through a process called consensus. The
term Distributed Ledger Technology (DLT) is often associated with this kind of ledger – one that is logically singular,
but has many consistent copies distributed throughout a network.
Let’s now examine the world state and blockchain data structures in more detail.
The world state holds the current value of the attributes of a business object as a unique ledger state. That’s use-
ful because programs usually require the current value of an object; it would be cumbersome to traverse the entire
blockchain to calculate an object’s current value – you just get it directly from the world state.
A ledger world state containing two states. The first state is: key=CAR1 and value=Audi. The second state has a more
complex value: key=CAR2 and value={model:BMW, color=red, owner=Jane}. Both states are at version 0.
A ledger state records a set of facts about a particular business object. Our example shows ledger states for two cars,
CAR1 and CAR2, each having a key and a value. An application program can invoke a smart contract which uses
simple ledger APIs to get, put and delete states. Notice how a state value can be simple (Audi...) or compound (model:BMW...). The world state is often queried to retrieve objects with certain attributes, for example to find all red BMWs.
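For example, a smart contract written with the Node.js fabric-contract-api might use these ledger APIs as follows (a sketch; the contract and its state layout are hypothetical, and the rich query assumes a CouchDB world state, discussed below):
'use strict';
const { Contract } = require('fabric-contract-api');

class CarContract extends Contract {
    async createCar(ctx, key, carJson) {
        // put: write (or overwrite) the state for this key in the world state
        await ctx.stub.putState(key, Buffer.from(carJson));
    }

    async readCar(ctx, key) {
        // get: read the current value directly, without traversing the log
        const carBytes = await ctx.stub.getState(key);
        return carBytes.toString();
    }

    async deleteCar(ctx, key) {
        // delete: remove the state; its history remains on the blockchain
        await ctx.stub.deleteState(key);
    }

    async findRedBMWs(ctx) {
        // rich query against state attributes (requires CouchDB)
        const query = { selector: { model: 'BMW', color: 'red' } };
        const iterator = await ctx.stub.getQueryResult(JSON.stringify(query));
        const results = [];
        while (true) {
            const res = await iterator.next();
            if (res.value) {
                results.push(res.value.value.toString());
            }
            if (res.done) {
                await iterator.close();
                return JSON.stringify(results);
            }
        }
    }
}
module.exports = CarContract;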
The world state is implemented as a database. This makes a lot of sense because a database provides a rich set of
operators for the efficient storage and retrieval of states. We’ll see later that Hyperledger Fabric can be configured
to use different world state databases to address the needs of different types of state values and the access patterns
required by applications, for example in complex queries.
Applications submit transactions which capture changes to the world state, and these transactions end up being com-
mitted to the ledger blockchain. Applications are insulated from the details of this consensus mechanism by the
Hyperledger Fabric SDK; they merely invoke a smart contract, and are notified when the transaction has been in-
cluded in the blockchain (whether valid or invalid). The key design point is that only transactions that are signed by
the required set of endorsing organizations will result in an update to the world state. If a transaction is not signed by
sufficient endorsers, it will not result in a change of world state. You can read more about how applications use smart
contracts, and how to develop applications.
You’ll also notice that a state has a version number, and in the diagram above, states CAR1 and CAR2 are at their starting versions, 0. The version number is for internal use by Hyperledger Fabric, and is incremented every time the state changes. The version is checked whenever the state is updated to make sure the current state matches the version at the time of endorsement. This ensures that the world state is changing as expected; that there has not been a concurrent update.
Finally, when a ledger is first created, the world state is empty. Because any transaction which represents a valid change
to world state is recorded on the blockchain, it means that the world state can be re-generated from the blockchain
at any time. This can be very convenient – for example, the world state is automatically generated when a peer is
created. Moreover, if a peer fails abnormally, the world state can be regenerated on peer restart, before transactions
are accepted.
4.9.5 Blockchain
Let’s now turn our attention from the world state to the blockchain. Whereas the world state contains a set of facts
relating to the current state of a set of business objects, the blockchain is an historical record of the facts about how
these objects arrived at their current states. The blockchain has recorded every previous version of each ledger state
and how it has been changed.
The blockchain is structured as a sequential log of interlinked blocks, where each block contains a sequence of transactions, each transaction representing a query or update to the world state. The exact mechanism by which transactions
are ordered is discussed elsewhere; what’s important is that block sequencing, as well as transaction sequencing within
blocks, is established when blocks are first created by a Hyperledger Fabric component called the ordering service.
Each block’s header includes a hash of the block’s transactions, as well as a copy of the hash of the prior block’s header.
In this way, all transactions on the ledger are sequenced and cryptographically linked together. This hashing and
linking makes the ledger data very secure. Even if one node hosting the ledger was tampered with, it would not be
able to convince all the other nodes that it has the ‘correct’ blockchain because the ledger is distributed throughout a
network of independent nodes.
The blockchain is always implemented as a file, in contrast to the world state, which uses a database. This is a sensible
design choice as the blockchain data structure is heavily biased towards a very small set of simple operations. Ap-
pending to the end of the blockchain is the primary operation, and query is currently a relatively infrequent operation.
Let’s have a look at the structure of a blockchain in a little more detail.
A blockchain B containing blocks B0, B1, B2, B3. B0 is the first block in the blockchain, the genesis block.
In the above diagram, we can see that block B2 has a block data D2 which contains all its transactions: T5, T6, T7.
Most importantly, B2 has a block header H2, which contains a cryptographic hash of all the transactions in D2 as well as a copy of the equivalent hash from the previous block B1. In this way, blocks are inextricably and immutably linked
to each other, which the term blockchain so neatly captures!
Finally, as you can see in the diagram, the first block in the blockchain is called the genesis block. It’s the starting
point for the ledger, though it does not contain any user transactions. Instead, it contains a configuration transaction
containing the initial state of the network channel (not shown). We discuss the genesis block in more detail when we
discuss the blockchain network and channels in the documentation.
4.9.6 Blocks
Let’s have a closer look at the structure of a block. It consists of three sections:
• Block Header
This section comprises three fields, written when a block is created.
– Block number: An integer starting at 0 (the genesis block), and increased by 1 for every new block
appended to the blockchain.
– Current Block Hash: The hash of all the transactions contained in the current block.
– Previous Block Hash: A copy of the hash from the previous block in the blockchain.
These fields are internally derived by cryptographically hashing the block data. They ensure that each and every
block is inextricably linked to its neighbour, leading to an immutable ledger.
Block header details. The header H2 of block B2 consists of block number 2, the hash CH2 of the current block
data D2, and a copy of a hash PH1 from the previous block, block number 1.
• Block Data
This section contains a list of transactions arranged in order. It is written when the block is created by the
ordering service. These transactions have a rich but straightforward structure, which we describe later in this
topic.
• Block Metadata
This section contains the time when the block was written, as well as the certificate, public key and signature
of the block writer. Subsequently, the block committer also adds a valid/invalid indicator for every transaction,
though this information is not included in the hash, as that is created when the block is created.
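A toy sketch of how the three header fields chain blocks together (illustrative only – Fabric actually hashes an encoded form of the header and transaction data, not JSON):
'use strict';
const crypto = require('crypto');

const sha256 = (data) => crypto.createHash('sha256').update(data).digest('hex');

// Simplified model of the three header fields described above
function makeBlock(number, transactions, previousBlock) {
    return {
        header: {
            number, // the genesis block is number 0
            currentBlockHash: sha256(JSON.stringify(transactions)),
            previousBlockHash: previousBlock
                ? sha256(JSON.stringify(previousBlock.header)) // links to the predecessor
                : null, // the genesis block has no predecessor
        },
        data: transactions,
    };
}

const b0 = makeBlock(0, ['configTx'], null); // genesis block
const b1 = makeBlock(1, ['T1', 'T2'], b0);
const b2 = makeBlock(2, ['T5', 'T6', 'T7'], b1);
// Tampering with T5 changes b2's currentBlockHash; any change to b2's header
// would then change the next block's previousBlockHash, and so on down the chain.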
4.9.7 Transactions
As we’ve seen, a transaction captures changes to the world state. Let’s have a look at the detailed blockdata structure
which contains the transactions in a block.
Transaction details. Transaction T4 in blockdata D1 of block B1 consists of transaction header, H4, a transaction
signature, S4, a transaction proposal P4, a transaction response, R4, and a list of endorsements, E4.
In the above example, we can see the following fields:
• Header
This section, illustrated by H4, captures some essential metadata about the transaction – for example, the name
of the relevant chaincode, and its version.
• Signature
This section, illustrated by S4, contains a cryptographic signature, created by the client application. This field
is used to check that the transaction details have not been tampered with, as it requires the application’s private
key to generate it.
• Proposal
This field, illustrated by P4, encodes the input parameters supplied by an application to the smart contract
which creates the proposed ledger update. When the smart contract runs, this proposal provides a set of input
parameters, which, in combination with the current world state, determines the new world state.
• Response
This section, illustrated by R4, captures the before and after values of the world state, as a Read Write set
(RW-set). It’s the output of a smart contract, and if the transaction is successfully validated, it will be applied to
the ledger to update the world state.
• Endorsements
As shown in E4, this is a list of signed transaction responses from each required organization sufficient to satisfy
the endorsement policy. You’ll notice that, whereas only one transaction response is included in the transaction,
there are multiple endorsements. That’s because each endorsement effectively encodes its organization’s partic-
ular transaction response – meaning that there’s no need to include any transaction response that doesn’t match
sufficient endorsements as it will be rejected as invalid, and not update the world state.
That concludes the major fields of the transaction – there are others, but these are the essential ones that you need to
understand to have a solid understanding of the ledger data structure.
4.9.8 World State database options
The world state is physically implemented as a database, to provide simple and efficient storage and retrieval of ledger
states. As we’ve seen, ledger states can have simple or compound values, and to accommodate this, the world state
database implementation can vary, allowing these values to be efficiently implemented. Options for the world state
database currently include LevelDB and CouchDB.
LevelDB is the default and is particularly appropriate when ledger states are simple key-value pairs. A LevelDB
database is closely co-located with a network node – it is embedded within the same operating system process.
CouchDB is a particularly appropriate choice when ledger states are structured as JSON documents because CouchDB
supports the rich queries and update of richer data types often found in business transactions. Implementation-wise,
CouchDB runs in a separate operating system process, but there is still a 1:1 relation between a peer node and a
CouchDB instance. All of this is invisible to a smart contract. See CouchDB as the StateDatabase for more information
on CouchDB.
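For example, a peer can be switched from the default LevelDB to CouchDB via the ledger section of its core.yaml, or with the equivalent environment variable overrides (the address and credentials shown are illustrative):
ledger:
  state:
    stateDatabase: CouchDB          # the default is goleveldb
    couchDBConfig:
      couchDBAddress: couchdb0:5984
      username: admin
      password: adminpw

# or, equivalently, in a container environment:
CORE_LEDGER_STATE_STATEDATABASE=CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984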
In LevelDB and CouchDB, we see an important aspect of Hyperledger Fabric – it is pluggable. The world state
database could be a relational data store, or a graph store, or a temporal database. This provides great flexibility in the
types of ledger states that can be efficiently accessed, allowing Hyperledger Fabric to address many different types of
problems.
4.9.9 Example Ledger: fabcar
As we end this topic on the ledger, let’s have a look at a sample ledger. If you’ve run the fabcar sample application,
then you’ve created this ledger.
The fabcar sample app creates a set of 10 cars each with a unique identity; a different color, make, model and owner.
Here’s what the ledger looks like after the first four cars have been created.
The ledger, L, comprises a world state, W and a blockchain, B. W contains four states with keys: CAR0, CAR1, CAR2 and CAR3. B contains two blocks, 0 and 1. Block 1 contains four transactions: T1, T2, T3, T4.
We can see that the world state contains states that correspond to CAR0, CAR1, CAR2 and CAR3. CAR0 has a value
which indicates that it is a blue Toyota Prius, currently owned by Tomoko, and we can see similar states and values
for the other cars. Moreover, we can see that all car states are at version number 0, indicating that this is their starting
version number – they have not been updated since they were created.
We can also see that the blockchain contains two blocks. Block 0 is the genesis block, though it does not contain any transactions that relate to cars. Block 1, however, contains transactions T1, T2, T3, T4 and these correspond to transactions that created the initial states for CAR0 to CAR3 in the world state. We can see that block 1 is linked to
block 0.
We have not shown the other fields in the blocks or transactions, specifically headers and hashes. If you’re interested
in the precise details of these, you will find a dedicated reference topic elsewhere in the documentation. It gives you
a fully worked example of an entire block with its transactions in glorious detail – but for now, you have achieved a
solid conceptual understanding of a Hyperledger Fabric ledger. Well done!
4.9.10 Namespaces
Even though we have presented the ledger as though it were a single world state and single blockchain, that’s a little bit
of an over-simplification. In reality, each chaincode has its own world state that is separate from all other chaincodes.
World states are in a namespace so that only smart contracts within the same chaincode can access a given namespace.
A blockchain is not namespaced. It contains transactions from many different smart contract namespaces. You can
read more about chaincode namespaces in this topic.
Let’s now look at how the concept of a namespace is applied within a Hyperledger Fabric channel.
4.9.11 Channels
In Hyperledger Fabric, each channel has a completely separate ledger. This means a completely separate blockchain,
and completely separate world states, including namespaces. It is possible for applications and smart contracts to
communicate between channels so that ledger information can be accessed between them.
You can read more about how ledgers work with channels in this topic.
See the Transaction Flow, Read-Write set semantics and CouchDB as the StateDatabase topics for a deeper dive on
transaction flow, concurrency control, and the world state database.
4.10 Smart Contracts and Chaincode
Before businesses can transact with each other, they must define a common set of contracts covering common terms, data, rules, concept definitions, and processes. Taken together, these contracts lay out the business model that governs all of the interactions between transacting parties.
A smart contract defines the rules between different organizations in executable code. Applications invoke a smart
contract to generate transactions that are recorded on the ledger.
Using a blockchain network, we can turn these contracts into executable programs – known in the industry as smart
contracts – to open up a wide variety of new possibilities. That’s because a smart contract can implement the gov-
ernance rules for any type of business object, so that they can be automatically enforced when the smart contract is
executed. For example, a smart contract might ensure that a new car delivery is made within a specified timeframe,
or that funds are released according to prearranged terms, improving the flow of goods or capital respectively. Most
importantly however, the execution of a smart contract is much more efficient than a manual human business process.
In the diagram above, we can see how two organizations, ORG1 and ORG2 have defined a car smart contract to
query, transfer and update cars. Applications from these organizations invoke this smart contract to perform
an agreed step in a business process, for example to transfer ownership of a specific car from ORG1 to ORG2.
4.10.2 Terminology
Hyperledger Fabric users often use the terms smart contract and chaincode interchangeably. In general, a smart
contract defines the transaction logic that controls the lifecycle of a business object contained in the world state. It is
then packaged into a chaincode which is then deployed to a blockchain network. Think of smart contracts as governing
transactions, whereas chaincode governs how smart contracts are packaged for deployment.
A smart contract is defined within a chaincode. Multiple smart contracts can be defined within the same chaincode.
When a chaincode is deployed, all smart contracts within it are made available to applications.
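With the Node.js fabric-contract-api, for instance, a chaincode package simply exports every smart contract it contains; a sketch (the file layout is hypothetical) for a vehicle chaincode like the one described below:
'use strict';
// index.js of the chaincode package: one chaincode, several smart contracts
const CarContract = require('./lib/car-contract');
const BoatContract = require('./lib/boat-contract');
const TruckContract = require('./lib/truck-contract');

// Every contract class listed here becomes available to applications
// when the chaincode is deployed
module.exports.contracts = [CarContract, BoatContract, TruckContract];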
In the diagram, we can see a vehicle chaincode that contains three smart contracts: cars, boats and trucks. We
can also see an insurance chaincode that contains four smart contracts: policy, liability, syndication
and securitization. In both cases these contracts cover key aspects of the business process relating to vehicles
and insurance. In this topic, we will use the car contract as an example. We can see that a smart contract is a domain
specific program which relates to specific business processes, whereas a chaincode is a technical container of a group
of related smart contracts for installation and instantiation.
4.10.3 Ledger
At the simplest level, a blockchain immutably records transactions which update states in a ledger. A smart contract
programmatically accesses two distinct pieces of the ledger – a blockchain, which immutably records the history of
all transactions, and a world state that holds a cache of the current value of these states, as it’s the current value of an
object that is usually required.
Smart contracts primarily put, get and delete states in the world state, and can also query the immutable blockchain
record of transactions.
• A get typically represents a query to retrieve information about the current state of a business object.
• A put typically creates a new business object or modifies an existing one in the ledger world state.
• A delete typically represents the removal of a business object from the current state of the ledger, but not its
history.
Smart contracts have many APIs available to them. Critically, in all cases, whether transactions create, read, update or
delete business objects in the world state, the blockchain contains an immutable record of these changes.
4.10.4 Development
Smart contracts are the focus of application development, and as we’ve seen, one or more smart contracts can be
defined within a single chaincode. Deploying a chaincode to a network makes all its smart contracts available to the
organizations in that network. It means that only administrators need to worry about chaincode; everyone else can
think in terms of smart contracts.
At the heart of a smart contract is a set of transaction definitions. For example, look at fabcar.js, where you can see a smart contract transaction that creates a new car and records it in the ledger world state:
async createCar(ctx, carNumber, make, model, color, owner) {
    const car = {
        color,
        docType: 'car',
        make,
        model,
        owner,
    };
    await ctx.stub.putState(carNumber, Buffer.from(JSON.stringify(car)));
}
You can learn more about the Fabcar smart contract in the Writing your first application tutorial.
A smart contract can describe an almost infinite array of business use cases relating to immutability of data in multi-
organizational decision making. The job of a smart contract developer is to take an existing business process that
might govern financial prices or delivery conditions, and express it as a smart contract in a programming language
such as JavaScript, GOLANG or Java. The legal and technical skills required to convert centuries of legal language into programming languages are increasingly practiced by smart contract auditors. You can learn about how to design and develop a smart contract in the Developing applications topic.
4.10.5 Endorsement
Associated with every chaincode is an endorsement policy that applies to all of the smart contracts defined within
it. An endorsement policy is very important; it indicates which organizations in a blockchain network must sign a
transaction generated by a given smart contract in order for that transaction to be declared valid.
Every smart contract has an endorsement policy associated with it. This endorsement policy identifies which orga-
nizations must approve transactions generated by the smart contract before those transactions can be identified as
valid.
An example endorsement policy might define that three of the four organizations participating in a blockchain net-
work must sign a transaction before it is considered valid. All transactions, whether valid or invalid are added to a
distributed ledger, but only valid transactions update the world state.
If an endorsement policy specifies that more than one organization must sign a transaction, then the smart contract
must be executed by a sufficient set of organizations in order for a valid transaction to be generated. In the example
above, a smart contract transaction to transfer a car would need to be executed and signed by both ORG1 and
ORG2 for it to be valid.
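Endorsement policies are written in a simple expression language. Sketches of the two policies just described (the MSP IDs are illustrative) – the first is satisfied by any three of four organizations, the second requires both ORG1 and ORG2:
OutOf(3, 'Org1MSP.peer', 'Org2MSP.peer', 'Org3MSP.peer', 'Org4MSP.peer')
AND('Org1MSP.peer', 'Org2MSP.peer')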
Endorsement policies are what make Hyperledger Fabric different to other blockchains like Ethereum or Bitcoin. In
these systems valid transactions can be generated by any node in the network. Hyperledger Fabric more realistically
models the real world; transactions must be validated by trusted organizations in a network. For example, a government
organization must sign a valid issueIdentity transaction, or both the buyer and seller of a car must sign a
car transfer transaction. Endorsement policies are designed to allow Hyperledger Fabric to better model these types
of real-world interactions.
Finally, endorsement policies are just one example of policy in Hyperledger Fabric. Other policies can be defined
to identify who can query or update the ledger, or add or remove participants from the network. In general, policies
should be agreed in advance by the consortium of organizations in a blockchain network, although they are not set
in stone. Indeed, policies themselves can define the rules by which they can be changed. And although an advanced
topic, it is also possible to define custom endorsement policy rules over and above those provided by Fabric.
4.10.6 Valid transactions
When a smart contract executes, it runs on a peer node owned by an organization in the blockchain network. The
contract takes a set of input parameters called the transaction proposal and uses them in combination with its program
logic to read and write the ledger. Changes to the world state are captured as a transaction proposal response (or just
transaction response) which contains a read-write set with both the states that have been read, and the new states
that are to be written if the transaction is valid. Notice that the world state is not updated when the smart contract
is executed!
All transactions have an identifier, a proposal, and a response signed by a set of organizations. All transactions are
recorded on the blockchain, whether valid or invalid, but only valid transactions contribute to the world state.
Examine the car transfer transaction. You can see a transaction t3 for a car transfer between ORG1 and ORG2. See how the transaction has input {CAR1, ORG1, ORG2} and output {CAR1.owner=ORG1, CAR1.owner=ORG2}, representing the change of owner from ORG1 to ORG2. Notice how the input is signed by the
application’s organization ORG1, and the output is signed by both organizations identified by the endorsement policy,
ORG1 and ORG2. These signatures were generated by using each actor’s private key, and mean that anyone in the
network can verify that all actors in the network are in agreement about the transaction details.
A transaction that is distributed to all peer nodes in the network is validated in two phases. Firstly, the transaction
is checked to ensure it has been signed by sufficient organizations according to the endorsement policy. Secondly,
it is checked to ensure that the current value of the world state matches the read set of the transaction when it was
signed by the endorsing peer nodes; that there has been no intermediate update. If a transaction passes both these tests,
it is marked as valid. All transactions are added to the blockchain history, whether valid or invalid, but only valid
transactions result in an update to the world state.
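Conceptually, the committing peer’s logic looks something like this (an illustrative sketch only, not Fabric’s actual validation code; the helper methods are hypothetical):
// Illustrative sketch of the two validation phases
function validateTransaction(tx, worldState, endorsementPolicy) {
    // Phase 1: were enough organizations' signatures collected
    // to satisfy the endorsement policy?
    if (!endorsementPolicy.isSatisfiedBy(tx.endorsements)) {
        return 'INVALID_ENDORSEMENT';
    }
    // Phase 2: does every state the transaction read still have the version
    // it had at endorsement time, i.e. has there been no intermediate update?
    for (const read of tx.readSet) {
        if (worldState.getVersion(read.key) !== read.version) {
            return 'MVCC_READ_CONFLICT';
        }
    }
    return 'VALID'; // only now is the write set applied to the world state
}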
In our example, t3 is a valid transaction, so the owner of CAR1 has been updated to ORG2. However, t4 (not shown)
is an invalid transaction, so while it was recorded in the ledger, the world state was not updated, and CAR2 remains
owned by ORG2.
Finally, to understand how to use a smart contract or chaincode with world state, read the chaincode namespace topic.
4.10.7 Channels
Hyperledger Fabric allows an organization to simultaneously participate in multiple, separate blockchain networks
via channels. By joining multiple channels, an organization can participate in a so-called network of networks.
Channels provide an efficient sharing of infrastructure while maintaining data and communications privacy. They
are independent enough to help organizations separate their work traffic with different counterparties, but integrated
enough to allow them to coordinate independent activities when necessary.
A channel provides a completely separate communication mechanism between a set of organizations. When a chain-
code is instantiated on a channel, an endorsement policy is defined for it; all the smart contracts within the chaincode
are made available to the applications on that channel.
An administrator defines an endorsement policy for a chaincode when it is instantiated on a channel, and can change
it when the chaincode is upgraded. The endorsement policy applies equally to all smart contracts defined within the
same chaincode deployed to a channel. It also means that a single smart contract can be deployed to different channels
with different endorsement policies.
In the example above, the car contract is deployed to the VEHICLE channel, and an insurance contract is de-
ployed to the INSURANCE channel. The car contract has an endorsement policy that requires ORG1 and ORG2 to
sign transactions before they are considered valid, whereas the insurance contract has an endorsement policy that
only requires ORG3 to sign valid transactions. ORG1 participates in two networks, the VEHICLE channel and the INSURANCE channel, and can coordinate activity across these two networks with ORG2 and ORG3 respectively.
4.10.8 Intercommunication
Smart contracts are able to call other smart contracts both within the same channel and across different channels. In this way, they can read and write world state data to which they would not otherwise have access due to smart contract namespaces.
There are limitations to this inter-contract communication, which are described fully in the chaincode namespace
topic.
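In Node.js chaincode, such calls are made with invokeChaincode (a sketch; the chaincode, function and channel names are hypothetical). When the called chaincode is on a different channel, the call is effectively read-only – any writes it makes are not committed:
'use strict';
const { Contract } = require('fabric-contract-api');

class QuoteContract extends Contract {
    async getInsuranceQuote(ctx, carKey) {
        // Invoke the 'insurance' chaincode on the INSURANCE channel;
        // the returned payload can be read, but its writes are discarded
        const response = await ctx.stub.invokeChaincode(
            'insurance', ['getQuote', carKey], 'INSURANCE');
        return response.payload.toString();
    }
}
module.exports = QuoteContract;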
4.10.9 System chaincode
The smart contracts defined within a chaincode encode the domain dependent rules for a business process agreed
between a set of blockchain organizations. However, a chaincode can also define low-level program code which
corresponds to domain independent system interactions, unrelated to these smart contracts for business processes.
The following are the different types of system chaincodes and their associated abbreviations:
• Lifecycle system chaincode (LSCC) runs in all peers to handle package signing, install, instantiate, and upgrade chaincode requests. You can read more about how the LSCC implements this process.
• Configuration system chaincode (CSCC) runs in all peers to handle changes to a channel configuration, such as
a policy update. You can read more about this process in the following chaincode topic.
• Query system chaincode (QSCC) runs in all peers to provide ledger APIs which include block query, transaction
query etc. You can read more about these ledger APIs in the transaction context topic.
• Endorsement system chaincode (ESCC) runs in endorsing peers to cryptographically sign a transaction response.
You can read more about how the ESCC implements this process.
• Validation system chaincode (VSCC) validates a transaction, including checking endorsement policy and read-write set versioning. You can read more about how the VSCC implements this process.
It is possible for low level Fabric developers and administrators to modify these system chaincodes for their own uses.
However, the development and management of system chaincodes is a specialized activity, quite separate from the
development of smart contracts, and is not normally necessary. Changes to system chaincodes must be handled with
extreme care as they are fundamental to the correct functioning of a Hyperledger Fabric network. For example, if a system chaincode is not developed correctly, one peer node may update its copy of the world state or blockchain differently to another peer node. This lack of consensus is one form of a ledger fork, a very undesirable situation.
The Hyperledger Requirements WG is documenting a number of blockchain use cases and maintaining an inventory
here.
CHAPTER 5
Getting Started
5.1 Prerequisites
Before we begin, if you haven’t already done so, you may wish to check that you have all the prerequisites below
installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger
Fabric.
Download the latest version of the cURL tool if it is not already installed or if you get errors running the curl commands
from the documentation.
Note: If you’re on Windows please see the specific note on Windows extras below.
You will need the following installed on the platform on which you will be operating, or developing on (or for),
Hyperledger Fabric:
• MacOSX, *nix, or Windows 10: Docker Docker version 17.06.2-ce or greater is required.
• Older versions of Windows: Docker Toolbox - again, Docker version 17.06.2-ce or greater is required.
You can check the version of Docker you have installed with the following command from a terminal prompt:
docker --version
Note: Installing Docker for Mac or Windows, or Docker Toolbox will also install Docker Compose. If you already had Docker installed, you should check that you have Docker Compose version 1.14.0 or greater installed. If not, we recommend that you install a more recent version of Docker.
You can check the version of Docker Compose you have installed with the following command from a terminal prompt:
docker-compose --version
Hyperledger Fabric uses the Go Programming Language for many of its components.
• Go version 1.11.x is required.
Given that we will be writing chaincode programs in Go, there are two environment variables you will need to set
properly; you can make these settings permanent by placing them in the appropriate startup file, such as your personal
~/.bashrc file if you are using the bash shell under Linux.
First, you must set the environment variable GOPATH to point at the Go workspace containing the downloaded Fabric
code base, with something like:
export GOPATH=$HOME/go
Second, you should (again, in the appropriate startup file) extend your command search path to include the Go bin
directory, such as the following example for bash under Linux:
export PATH=$PATH:$GOPATH/bin
While this directory may not exist in a new Go workspace installation, it is populated later by the Fabric build system
with a small number of Go executables used by other parts of the build system. So even if you currently have no such
directory yet, extend your shell search path as above.
If you will be developing applications for Hyperledger Fabric leveraging the Hyperledger Fabric SDK for Node.js,
you will need to have version 8.9.x of Node.js installed.
Note: Versions other than the 8.x series are not supported at this time.
• Node.js - version 8.x
Note: Installing Node.js will also install NPM, however it is recommended that you confirm the version of NPM installed. You can upgrade the npm tool with the following command:
npm install -g npm
Python
By default Ubuntu 16.04 comes with Python 3.5.1 installed as the python3 binary. The Fabric Node.js SDK requires
an iteration of Python 2.7 in order for npm install operations to complete successfully. Retrieve the 2.7 version
with the following command:
apt-get install python
Check your version(s):
python --version
If you are developing on Windows 7, you will want to work within the Docker Quickstart Terminal which uses Git
Bash and provides a better alternative to the built-in Windows shell.
However, experience has shown this to be a poor development environment with limited functionality. It is suitable to run Docker based scenarios, such as Getting Started, but you may have difficulties with operations involving the make and docker commands.
On Windows 10 you should use the native Docker distribution and you may use the Windows PowerShell. However,
for the binaries command to succeed you will still need to have the uname command available. You can get it as
part of Git but beware that only the 64bit version is supported.
Before running any git clone commands, run the following commands:
git config --global core.autocrlf false
git config --global core.longpaths true
You can check the setting of these parameters with the following commands:
git config --get core.autocrlf
git config --get core.longpaths
Your environment should now be ready to go through the Getting Started samples and tutorials.
Note: If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please
visit the Still Have Questions? page for some tips on where to find additional help.
While we work on developing real installers for the Hyperledger Fabric binaries, we provide a script that will download
and install samples and binaries to your system. We think that you’ll find the sample applications installed useful to
learn more about the capabilities and operations of Hyperledger Fabric.
Note: If you are running on Windows you will want to make use of the Docker Quickstart Terminal for the upcoming
terminal commands. Please visit the Prerequisites if you haven’t previously installed it.
If you are using Docker Toolbox on Windows 7 or macOS, you will need to use a location under C:\Users (Windows
7) or /Users (macOS) when installing and running the samples.
If you are using Docker for Mac, you will need to use a location under /Users, /Volumes, /private, or /tmp.
To use a different location, please consult the Docker documentation for file sharing.
If you are using Docker for Windows, please consult the Docker documentation for shared drives and use a location
under one of the shared drives.
Determine a location on your machine where you want to place the fabric-samples repository and enter that directory
in a terminal window. The command that follows will perform the following steps:
1. If needed, clone the hyperledger/fabric-samples repository
2. Checkout the appropriate version tag
3. Install the Hyperledger Fabric platform-specific binaries and config files for the version specified into the /bin
and /config directories of fabric-samples
4. Download the Hyperledger Fabric docker images for the version specified
Once you are ready, and in the directory into which you will install the Fabric Samples and binaries, go ahead and execute the following command:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 1.4.0
Note: If you want to download different versions for Fabric, Fabric-ca and thirdparty Docker images, you must pass
the version identifier for each.
Note: If you get an error running the above curl command, you may have too old a version of curl that does not
handle redirects or an unsupported environment.
Please visit the Prerequisites page for additional information on where to find the latest version of curl and get the right
environment. Alternately, you can substitute the un-shortened URL: https://raw.githubusercontent.com/hyperledger/
fabric/master/scripts/bootstrap.sh
Note: You can use the command above for any published version of Hyperledger Fabric. Simply replace 1.4.0 with
the version identifier of the version you wish to install.
The command above downloads and executes a bash script that will download and extract all of the platform-specific
binaries you will need to set up your network and place them into the cloned repo you created above. It retrieves the
following platform-specific binaries:
• configtxgen,
• configtxlator,
• cryptogen,
• discover,
• idemixgen,
• orderer,
• peer, and
• fabric-ca-client
and places them in the bin sub-directory of the current working directory.
You may want to add that to your PATH environment variable so that these can be picked up without fully qualifying the path to each binary. e.g.:
export PATH=<path to download location>/bin:$PATH
Finally, the script will download the Hyperledger Fabric docker images from Docker Hub into your local Docker
registry and tag them as ‘latest’.
The script lists out the Docker images installed upon conclusion.
Look at the names for each image; these are the components that will ultimately comprise our Hyperledger Fabric
network. You will also notice that you have two instances of the same image ID - one tagged as “amd64-1.x.x” and
one tagged as “latest”. Prior to 1.2.0, the image being downloaded was determined by uname -m and showed as
“x86_64-1.x.x”.
Note: On different architectures, the x86_64/amd64 would be replaced with the string identifying your architecture.
Note: If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please
visit the Still Have Questions? page for some tips on where to find additional help.
Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed
on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.
Once you have the prerequisites installed, you are ready to download and install Hyperledger Fabric. While we work on developing real installers for the Fabric binaries, we provide a script that will Install Samples, Binaries and Docker Images to your system. The script also will download the Docker images to your local registry.
Hyperledger Fabric offers a number of SDKs to support various programming languages. There are two officially
released SDKs for Node.js and Java:
• Hyperledger Fabric Node SDK and Node SDK documentation.
• Hyperledger Fabric Java SDK.
In addition, there are three more SDKs that have not yet been officially released (for Python, Go and REST), but they
are still available for downloading and testing:
• Hyperledger Fabric Python SDK.
• Hyperledger Fabric Go SDK.
• Hyperledger Fabric REST SDK.
Hyperledger Fabric provides an optional certificate authority service that you may choose to use to generate the
certificates and key material to configure and manage identity in your blockchain network. However, any CA that
can generate ECDSA certificates may be used.
CHAPTER 6
Developing Applications
PaperNet is a commercial paper network that allows suitably authorized participants to issue, trade, redeem and rate
commercial paper.
The PaperNet commercial paper network. Six organizations currently use the PaperNet network to issue, buy, sell, redeem and rate commercial paper. MagnetoCorp issues and redeems commercial paper. DigiBank, BigFund, BrokerHouse and HedgeMatic all trade commercial paper with each other. RateM provides various measures of risk for commercial
paper.
Let’s see how MagnetoCorp uses PaperNet and commercial paper to help its business.
MagnetoCorp is a well-respected company that makes self-driving electric vehicles. In early April 2020, MagnetoCorp
won a large order to manufacture 10,000 Model D cars for Daintree, a new entrant in the personal transport market.
Although the order represents a significant win for MagnetoCorp, Daintree will not have to pay for the vehicles until
they start to be delivered on November 1, six months after the deal was formally agreed between MagnetoCorp and
Daintree.
To manufacture the vehicles, MagnetoCorp will need to hire 1000 workers for at least 6 months. This puts a short term
strain on its finances – it will require an extra 5M USD each month to pay these new employees. Commercial paper
is designed to help MagnetoCorp overcome its short term financing needs – to meet payroll every month based on the
expectation that it will be cash rich when Daintree starts to pay for its new Model D cars.
At the end of May, MagnetoCorp needs 5M USD to meet payroll for the extra workers it hired on May 1. To do this, it
issues a commercial paper with a face value of 5M USD with a maturity date 6 months in the future – when it expects
to see cash flow from Daintree. DigiBank thinks that MagnetoCorp is creditworthy, and therefore doesn’t require much
of a premium above the central bank base rate of 2%, which would value 4.95M USD today at 5M USD in 6 months
time. It therefore purchases the MagnetoCorp 6 month commercial paper for 4.94M USD – a slight discount compared to the 4.95M USD it is worth. DigiBank fully expects that it will be able to redeem 5M USD from MagnetoCorp in 6 months time, making a total profit of 60K USD – roughly 10K USD more than the risk-free gain – as compensation for bearing the increased risk associated with this commercial paper. This extra 10K means it receives a 2.4% return on investment – significantly better than the risk free return of 2%.
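Roughly, reading these figures with simple interest:
risk-free value today = 5.00M USD / (1 + 2% × 6/12) ≈ 4.95M USD
DigiBank pays         = 4.94M USD
profit at redemption  = 5.00M − 4.94M = 60K USD (≈ 10K more than the ~50K risk-free gain)
return on investment  ≈ (60K / 4.94M) × 12/6 ≈ 2.4% per annum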
At the end of June, when MagnetoCorp issues a new commercial paper for 5M USD to meet June’s payroll, it is
purchased by BigFund for 4.94M USD. That’s because the commercial conditions are roughly the same in June as
they are in May, resulting in BigFund valuing MagnetoCorp commercial paper at the same price that DigiBank did in
May.
Each subsequent month, MagnetoCorp can issue new commercial paper to meet its payroll obligations, and these may
be purchased by DigiBank, or any other participant in the PaperNet commercial paper network – BigFund, HedgeMatic
or BrokerHouse. These organizations may pay more or less for the commercial paper depending on two factors – the
central bank base rate, and the risk associated with MagnetoCorp. This latter figure depends on a variety of factors
such as the production of Model D cars, and the creditworthiness of MagnetoCorp as assessed by RateM, a ratings
agency.
The organizations in PaperNet have different roles: MagnetoCorp issues paper; DigiBank, BigFund, HedgeMatic and BrokerHouse trade paper; and RateM rates paper. Organizations of the same role, such as DigiBank, BigFund, HedgeMatic and BrokerHouse, are competitors. Organizations of different roles are not necessarily competitors, yet might still have opposing business interests: for example, MagnetoCorp will desire a high rating for its papers to sell them at a high price, while DigiBank would benefit from a low rating, such that it can buy them at a low price. As
can be seen, even a seemingly simple network such as PaperNet can have complex trust relationships. A blockchain
can help establish trust among organizations that are competitors or have opposing business interests that might lead
to disputes. Fabric in particular has the means to capture even fine-grained trust relationships.
Let’s pause the MagnetoCorp story for a moment, and develop the client applications and smart contracts that Pa-
perNet uses to issue, buy, sell and redeem commercial paper as well as capture the trust relationships between the
organizations. We’ll come back to the role of the rating agency, RateM, a little later.
6.2 Analysis
A paper 00001 is issued by MagnetoCorp on May 31. Spend a few moments looking at the first state of this paper,
with its different properties and values:
Issuer = MagnetoCorp
Paper = 00001
Owner = MagnetoCorp
Issue date = 31 May 2020
Maturity = 30 November 2020
Face value = 5M USD
Current state = issued
This paper state is a result of the issue transaction and it brings MagnetoCorp’s first commercial paper into existence!
Notice how this paper has a 5M USD face value for redemption later in the year. See how the Issuer and Owner are
the same when paper 00001 is issued. Notice that this paper could be uniquely identified as MagnetoCorp00001 –
a composition of the Issuer and Paper properties. Finally, see how the property Current state = issued
quickly identifies the stage of MagnetoCorp paper 00001 in its lifecycle.
Shortly after issuance, the paper is bought by DigiBank. Spend a few moments looking at how the same commercial
paper has changed as a result of this buy transaction:
Issuer = MagnetoCorp
Paper = 00001
Owner = DigiBank
Issue date = 31 May 2020
Maturity date = 30 November 2020
Face value = 5M USD
Current state = trading
The most significant change is that of Owner – see how the paper initially owned by MagnetoCorp is now owned
by DigiBank. We could imagine how the paper might be subsequently sold to BrokerHouse or HedgeMatic, and
the corresponding change to Owner. Note how Current state allows us to easily identify that the paper is now trading.
After 6 months, if DigiBank still holds the commercial paper, it can redeem it with MagnetoCorp:
Issuer = MagnetoCorp
Paper = 00001
Owner = MagnetoCorp
Issue date = 31 May 2020
Maturity date = 30 November 2020
Face value = 5M USD
Current state = redeemed
This final redeem transaction has ended the commercial paper’s lifecycle – it can be considered closed. It is often
mandatory to keep a record of redeemed commercial papers, and the redeemed state allows us to quickly identify
these. The value of Owner of a paper can be used to perform access control on the redeem transaction, by comparing
the Owner against the identity of the transaction creator. Fabric supports this through the getCreator() chaincode
API. If golang is used as a chaincode language, the client identity chaincode library can be used to retrieve additional
attributes of the transaction creator.
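In Node.js chaincode the equivalent facility is the ClientIdentity class in fabric-shim. A sketch of such an owner check (it assumes, purely for illustration, that Owner holds the invoking organization’s MSP ID):
'use strict';
const { ClientIdentity } = require('fabric-shim');

// Throws unless the transaction creator's organization owns the paper
function checkRedeemer(ctx, paper) {
    const cid = new ClientIdentity(ctx.stub);
    if (paper.owner !== cid.getMSPID()) {
        throw new Error(cid.getID() + ' is not the current owner of this paper');
    }
}
module.exports = { checkRedeemer };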
6.2.2 Transactions
We’ve seen that paper 00001’s lifecycle is relatively straightforward – it moves between issued, trading and
redeemed as a result of an issue, buy, or redeem transaction.
These three transactions are initiated by MagnetoCorp and DigiBank (twice), and drive the state changes of paper
00001. Let’s have a look at the transactions that affect this paper in a little more detail:
Issue
Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD
See how the issue transaction has a structure with properties and values. This transaction structure is different to, but
closely matches, the structure of paper 00001. That’s because they are different things – paper 00001 reflects a state of
PaperNet that is a result of the issue transaction. It’s the logic behind the issue transaction (which we cannot see) that
takes these properties and creates this paper. Because the transaction creates the paper, it means there’s a very close
relationship between these structures.
The only organization that is involved in the issue transaction is MagnetoCorp. Naturally, MagnetoCorp needs to sign
off on the transaction. In general, the issuer of a paper is required to sign off on a transaction that issues a new paper.
Buy
Next, examine the buy transaction which transfers ownership of paper 00001 from MagnetoCorp to DigiBank:
Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = MagnetoCorp
New owner = DigiBank
Purchase time = 31 May 2020 10:00:00 EST
Price = 4.94M USD
See how the buy transaction has fewer properties that end up in this paper. That’s because this transaction only
modifies this paper. It’s only New owner = DigiBank that changes as a result of this transaction; everything else
is the same. That’s OK – the most important thing about the buy transaction is the change of ownership, and indeed
in this transaction, there’s an acknowledgement of the current owner of the paper, MagnetoCorp.
You might ask why the Purchase time and Price properties are not captured in paper 00001? This comes
back to the difference between the transaction and the paper. The 4.94 M USD price tag is actually a property of the
transaction, rather than a property of this paper. Spend a little time thinking about this difference; it is not as obvious as
it seems. We’re going to see later that the ledger will record both pieces of information – the history of all transactions
that affect this paper, as well as its latest state. Being clear on this separation of information is really important.
It’s also worth remembering that paper 00001 may be bought and sold many times. Although we’re skipping ahead a
little in our scenario, let’s examine what transactions we might see if paper 00001 changes ownership.
If we have a purchase by BigFund:
Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = DigiBank
New owner = BigFund
Purchase time = 2 June 2020 12:20:00 EST
Price = 4.93M USD
Followed by a subsequent purchase by HedgeMatic:
Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = BigFund
New owner = HedgeMatic
Purchase time = 3 June 2020 15:59:00 EST
Price = 4.90M USD
See how the paper owner changes, and how, in our example, the price changes. Can you think of a reason why the price of MagnetoCorp commercial paper might be falling?
Intuitively, a buy transaction demands that both the selling and the buying organizations sign off on the transaction, providing proof of the mutual agreement between the two parties to the deal.
Redeem
The redeem transaction for paper 00001 represents the end of its lifecycle. In our relatively simple example, DigiBank
initiates the transaction which transfers the commercial paper back to MagnetoCorp:
Txn = redeem
Issuer = MagnetoCorp
Paper = 00001
Current owner = HedgeMatic
Redeem time = 30 Nov 2020 12:00:00 EST
Again, notice how the redeem transaction has very few properties; all of the changes to paper 00001 can be calculated by the redeem transaction logic: the Issuer will become the new owner, and the Current state will change to redeemed. The Current owner property is specified in our example, so that it can be checked against
the current holder of the paper.
From a trust perspective, the same reasoning of the buy transaction also applies to the redeem instruction: both
organizations involved in the transaction are required to sign off on it.
In this topic, we’ve seen how transactions and the resultant paper states are the two most important concepts in
PaperNet. Indeed, we’ll see these two fundamental elements in any Hyperledger Fabric distributed ledger – a world
state, that contains the current value of all objects, and a blockchain that records the history of all transactions that
resulted in the current world state.
The required sign-offs on transactions are enforced through rules, which are evaluated before a transaction is appended to the ledger. Only if the required signatures are present will Fabric accept a transaction as valid.
You’re now in a great place to translate these ideas into a smart contract. Don’t worry if your programming is a little rusty; we’ll provide tips and pointers to understand the program code. Mastering the commercial paper smart contract is the first big step towards designing your own application. Or, if you’re a business analyst who’s comfortable with a little programming, don’t be afraid to keep digging a little deeper!
6.3.1 Lifecycle
As we’ve seen, there are two important concepts that concern us when dealing with commercial paper; states and
transactions. Indeed, this is true for all blockchain use cases; there are conceptual objects of value, modelled as
states, whose lifecycle transitions are described by transactions. An effective analysis of states and transactions is an
essential starting point for a successful implementation.
We can represent the life cycle of a commercial paper using a state transition diagram:
The state transition diagram for commercial paper. Commercial papers transition between issued, trading and re-
deemed states by means of the issue, buy and redeem transactions.
See how the state diagram describes how commercial papers change over time, and how specific transactions govern the life cycle transitions. In Hyperledger Fabric, smart contracts implement transaction logic that transitions commercial papers between their different states. Commercial paper states are actually held in the ledger world state, so let’s take a closer look at them.
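A sketch of these transitions as a tiny state machine (illustrative; the real smart contract enforces them inside its issue, buy and redeem transaction functions):
// Which states each transaction may be applied to;
// null means the paper does not exist yet
const allowedStates = {
    issue: [null],
    buy: ['issued', 'trading'],
    redeem: ['trading'],
};
const nextState = { issue: 'issued', buy: 'trading', redeem: 'redeemed' };

function applyTransaction(paper, txn) {
    const current = paper ? paper.currentState : null;
    if (!allowedStates[txn] || !allowedStates[txn].includes(current)) {
        throw new Error('cannot ' + txn + ' a paper in state ' + current);
    }
    return Object.assign({}, paper, { currentState: nextState[txn] });
}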
A commercial paper can be represented as a set of properties, each with a value. Typically, some combination of these
properties will provide a unique key for each paper.
See how a commercial paper Paper property has value 00001, and the Face value property has value 5M USD. Most importantly, the Current state property indicates whether the commercial paper is issued, trading or redeemed. In combination, the full set of properties makes up the state of a commercial paper. Moreover, the entire collection of these individual commercial paper states constitutes the ledger world state.
All ledger states share this form; each has a set of properties, each with a different value. This multi-property aspect
of states is a powerful feature – it allows us to think of a Fabric state as a vector rather than a simple scalar. We
then represent facts about whole objects as individual states, which subsequently undergo transitions controlled by
transaction logic. A Fabric state is implemented as a key/value pair, in which the value encodes the object properties
in a format that captures the object’s multiple properties, typically JSON. The ledger database can support advanced
query operations against these properties, which is very helpful for sophisticated object retrieval.
See how MagnetoCorp’s paper 00001 is represented as a state vector that transitions according to different transaction
stimuli:
A commercial paper state is brought into existence and transitions as a result of different transactions. Hyperledger
Fabric states have multiple properties, making them vectors rather than scalars.
Notice how each individual paper starts with the empty state, which is technically a nil state for the paper, as it
doesn’t exist! See how paper 00001 is brought into existence by the issue transaction, and how it is subsequently
updated as a result of the buy and redeem transactions.
Notice how each state is self-describing; each property has a name and a value. Although all our commercial papers
currently have the same properties, this need not be the case for all time, as Hyperledger Fabric supports different
states having different properties. This allows the same ledger world state to contain different forms of the same asset
as well as different types of asset. It also makes it possible to update a state’s structure; imagine a new regulation that
requires an additional data field. Flexible state properties support the fundamental requirement of data evolution over
time.
In most practical applications, a state will have a combination of properties that uniquely identify it in a given context
– its key. The key for a PaperNet commercial paper is formed by a concatenation of the Issuer and Paper
properties; so for MagnetoCorp’s first paper, it’s MagnetoCorp00001.
A state key allows us to uniquely identify a paper; it is created as a result of the issue transaction and subsequently
updated by buy and redeem. Hyperledger Fabric requires each state in a ledger to have a unique key.
When a unique key is not available from the existing set of properties, an application-determined unique key is
specified as an input to the transaction that creates the state. This unique key is usually some form of UUID, which,
although less readable, is standard practice. What’s important is that every individual state object in a ledger
must have a unique key.
As we’ve seen, commercial papers in PaperNet are stored as state vectors in a ledger. It’s a reasonable requirement to
be able to query different commercial papers from the ledger; for example: find all the papers issued by MagnetoCorp,
or: find all the papers issued by MagnetoCorp in the redeemed state.
To make these kinds of search tasks possible, it’s helpful to group all related papers together in a logical list. The
PaperNet design incorporates the idea of a commercial paper list – a logical container which is updated whenever
commercial papers are issued or otherwise changed.
Logical representation
It’s helpful to think of all PaperNet commercial papers being in a single list of commercial papers:
MagnetoCorp’s newly created commercial paper 00004 is added to the list of existing commercial papers.
New papers can be added to the list as a result of an issue transaction, and papers already in the list can be updated
with buy or redeem transactions. See how the list has a descriptive name: org.papernet.papers; it’s a really
good idea to use this kind of DNS name because well-chosen names will make your blockchain designs intuitive to
other people. This idea applies equally well to smart contract names.
Physical representation
While it’s correct to think of a single list of papers in PaperNet – org.papernet.papers – lists are best imple-
mented as a set of individual Fabric states, whose composite key associates the state with its list. In this way, each
state’s composite key is both unique and supports effective list query.
Representing a list of PaperNet commercial papers as a set of distinct Hyperledger Fabric states
Notice how each paper in the list is represented by a vector state, with a unique composite key formed by the concate-
nation of org.papernet.paper, Issuer and Paper properties. This structure is helpful for two reasons:
• It allows us to examine any state vector in the ledger to determine which list it’s in, without reference to a
separate list. It’s analogous to looking at a set of sports fans, and identifying which team they support by the
colour of the shirt they are wearing. The sports fans self-declare their allegiance; we don’t need a list of fans.
• Hyperledger Fabric internally uses a concurrency control mechanism to update a ledger, such that keeping
papers in separate state vectors vastly reduces the opportunity for shared-state collisions. Such collisions require
transaction re-submission, complicate application design, and decrease performance.
This second point is actually a key take-away for Hyperledger Fabric; the physical design of state vectors is very
important to optimum performance and behaviour. Keep your states separate!
We have discussed how the different roles in a network, such as issuer, trader or rating agency, as well as different
business interests, determine who needs to sign off on a transaction. In Fabric, these rules are captured by so-called
endorsement policies. The rules can be set at chaincode granularity, as well as for individual state keys.
This means that in PaperNet, we can set one rule for the whole namespace that determines which organizations can
issue new papers. Later, rules can be set and updated for individual papers to capture the trust relationships of buy and
redeem transactions.
In the next topic, we will show you how to combine these design concepts to implement the PaperNet commercial
paper smart contract, and then an application that exploits it!
A smart contract defines the different states of a business object and governs the processes that move the object between
these different states. Smart contracts are important because they allow architects and smart contract developers
to define the key business processes and data that are shared across the different organizations collaborating in a
blockchain network.
In the PaperNet network, the smart contract is shared by the different network participants, such as MagnetoCorp and
DigiBank. The same version of the smart contract must be used by all applications connected to the network so that
they jointly implement the same shared business processes and data.
A copy of the PaperNet commercial paper smart contract is contained in papercontract.js. View it with your
browser, or open it in your favourite editor if you’ve downloaded it.
You may notice from the file path that this is MagnetoCorp’s copy of the smart contract. MagnetoCorp and DigiBank
must agree the version of the smart contract that they are going to use. For now, it doesn’t matter which organization’s
copy you look at, they are all the same.
Spend a few moments looking at the overall structure of the smart contract; notice that it’s quite short! Towards the
top of papercontract.js, you’ll see that there’s a definition for the commercial paper smart contract:
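A sketch of that definition (the transaction methods themselves are examined below):

class CommercialPaperContract extends Contract {
    // Transaction definitions for issue, buy and redeem appear here
}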
The CommercialPaperContract class contains the transaction definitions for commercial paper – issue, buy
and redeem. It’s these transactions that bring commercial papers into existence and move them through their lifecy-
cle. We’ll examine these transactions soon, but for now notice how CommercialPaperContract extends the
Hyperledger Fabric Contract class. This built-in class, and the Context class, were brought into scope earlier:
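In the sample, these classes come from the fabric-contract-api module, brought into scope with a require
statement along these lines:

const { Contract, Context } = require('fabric-contract-api');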
Our commercial paper contract will use built-in features of these classes, such as automatic method invocation, a
per-transaction context, transaction handlers, and class-shared state.
Notice also how the class constructor uses its superclass to initialize itself with an explicit contract name:
constructor() {
super('org.papernet.commercialpaper');
}
Most importantly, org.papernet.commercialpaper is very descriptive – this smart contract is the agreed
definition of commercial paper for all PaperNet organizations.
Usually there will only be one smart contract per file – contracts tend to have different lifecycles, which makes it sen-
sible to separate them. However, in some cases, multiple smart contracts might provide syntactic help for applications,
e.g. EuroBond, DollarBond, YenBond, but essentially provide the same function. In such cases, smart contracts
and transactions can be disambiguated.
The issue function is given control whenever this contract is called to issue a commercial paper. Recall how commercial
paper 00001 was created with the following transaction:
Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD
We’ve changed the variable names for programming style, but see how these properties map almost directly to the
issue method variables.
The issue method is automatically given control by the contract whenever an application makes a request to issue a
commercial paper. The transaction property values are made available to the method via the corresponding variables.
See how an application submits a transaction using the Hyperledger Fabric SDK in the application topic, using a
sample application program.
You might have noticed an extra variable in the issue definition – ctx. It’s called the transaction context, and it’s
always first. By default, it maintains both per-contract and per-transaction information relevant to transaction logic.
For example, it would contain MagnetoCorp’s specified transaction identifier, a MagnetoCorp issuing user’s digital
certificate, as well as access to the ledger API.
See how the smart contract extends the default transaction context by implementing its own createContext()
method rather than accepting the default implementation:
createContext() {
return new CommercialPaperContext()
}
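The CommercialPaperContext class referenced here extends the built-in Context class; see how its
constructor adds the paperList property to the transaction context: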
class CommercialPaperContext extends Context {

    constructor() {
        super();
        // All papers are held in a list of papers
        this.paperList = new PaperList(this);
    }
}
We’ll soon see how ctx.paperList can be subsequently used to help store and retrieve all PaperNet commercial
papers.
To solidify your understanding of the structure of a smart contract transaction, locate the buy and redeem transaction
definitions, and see if you can work out how they map to their corresponding commercial paper transactions.
The buy transaction:
Txn = buy
Issuer = MagnetoCorp
Paper = 00001
Current owner = MagnetoCorp
New owner = DigiBank
Purchase time = 31 May 2020 10:00:00 EST
Price = 4.94M USD
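The redeem transaction: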
Txn = redeem
Issuer = MagnetoCorp
Paper = 00001
Redeemer = DigiBank
Redeem time = 31 Dec 2020 12:00:00 EST
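For reference, here is a sketch of the corresponding method signatures in papercontract.js; the parameter
names are illustrative, chosen to mirror the transaction properties above:

async buy(ctx, issuer, paperNumber, currentOwner, newOwner, price, purchaseDateTime) {...}

async redeem(ctx, issuer, paperNumber, redeemingOwner, redeemDateTime) {...}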
In both cases, observe the 1:1 correspondence between the commercial paper transaction and the smart contract method
definition. And don’t worry about the async and await keywords – they allow asynchronous JavaScript functions
to be treated like their synchronous cousins in other programming languages.
Now that you’ve seen how contracts are structured and transactions are defined, let’s focus on the logic within the
smart contract.
Recall the first issue transaction:
Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD
Now locate the corresponding logic in the issue method of papercontract.js; at its heart are these statements:

// Smart contract, rather than paper, moves paper into ISSUED state
paper.setIssued();

// Add the paper to the list of all similar commercial papers in the ledger world state
await ctx.paperList.addPaper(paper);
The logic is simple: take the transaction input variables, create a new commercial paper paper, add it to the list of all
commercial papers using paperList, and return the new commercial paper (serialized as a buffer) as the transaction
response.
See how paperList is retrieved from the transaction context to provide access to the list of commercial papers.
issue(), buy() and redeem() continually re-access ctx.paperList to keep the list of commercial papers
up-to-date.
The logic for the buy transaction is a little more elaborate:
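A sketch of that logic, based on the checks described next; helper names such as makeKey, getPaper,
isIssued, isTrading, setTrading and updatePaper are assumptions that follow the sample’s conventions:

async buy(ctx, issuer, paperNumber, currentOwner, newOwner, price, purchaseDateTime) {

    // Retrieve the current paper using its composite key
    let paperKey = CommercialPaper.makeKey([issuer, paperNumber]);
    let paper = await ctx.paperList.getPaper(paperKey);

    // Validate that the paper really is owned by the stated current owner
    if (paper.getOwner() !== currentOwner) {
        throw new Error('Paper ' + issuer + paperNumber + ' is not owned by ' + currentOwner);
    }

    // The first buy moves a paper from ISSUED into the TRADING state
    if (paper.isIssued()) {
        paper.setTrading();
    }

    // Only a TRADING paper can change hands
    if (paper.isTrading()) {
        paper.setOwner(newOwner);
    } else {
        throw new Error('Paper ' + issuer + paperNumber + ' is not trading');
    }

    // Update the paper in the ledger world state, and return it to the caller
    await ctx.paperList.updatePaper(paper);
    return paper.toBuffer();
}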
See how the transaction checks currentOwner and that paper is TRADING before changing the owner with
paper.setOwner(newOwner). The basic flow is simple though – check some pre-conditions, set the new owner,
update the commercial paper on the ledger, and return the updated commercial paper (serialized as a buffer) as the
transaction response.
Why don’t you see if you can understand the logic for the redeem transaction?
We’ve seen how to define and implement the issue, buy and redeem transactions using the CommercialPaper and
PaperList classes. Let’s end this topic by seeing how these classes work.
Locate the CommercialPaper class in the paper.js file:
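Its outline looks something like this (a sketch):

class CommercialPaper extends State {
    // constructor, createInstance and helper methods appear here
}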
This class contains the in-memory representation of a commercial paper state. See how the createInstance
method initializes a new commercial paper with the provided parameters:
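A sketch of createInstance, consistent with the constructor shown below; it simply wraps the supplied
properties into a new object:

static createInstance(issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {
    return new CommercialPaper({ issuer, paperNumber, issueDateTime, maturityDateTime, faceValue });
}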
See how every time the issue transaction is called, a new in-memory instance of a commercial paper is created con-
taining the transaction data.
A few important points to note:
• This is an in-memory representation; we’ll see later how it appears on the ledger.
• The CommercialPaper class extends the State class. State is an application-defined class which creates
a common abstraction for a state. All states have a business object class which they represent and a composite key,
and can be serialized and de-serialized, and so on. State helps our code be more legible when we are storing more
than one business object type on the ledger. Examine the State class in the state.js file.
• A paper computes its own key when it is created – this key will be used when the ledger is accessed. The key is
formed from a combination of issuer and paperNumber.
constructor(obj) {
super(CommercialPaper.getClass(), [obj.issuer, obj.paperNumber]);
Object.assign(this, obj);
}
• A paper is moved to the ISSUED state by the transaction, not by the paper class. That’s because it’s the smart
contract that governs the lifecycle state of the paper. For example, an import transaction might create a new
set of papers immediately in the TRADING state.
The rest of the CommercialPaper class contains simple helper methods:
getOwner() {
return this.owner;
}
Recall how methods like this were used by the smart contract to move the commercial paper through its lifecycle. For
example, in the redeem transaction we saw:
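A sketch of that fragment; getIssuer and setRedeemed are assumed helpers in the same style as getOwner:

if (paper.getOwner() === redeemingOwner) {
    paper.setOwner(paper.getIssuer());
    paper.setRedeemed();
}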
The PaperList utility class is used to manage all PaperNet commercial papers in the Hyperledger Fabric state
database. The PaperList data structures are described in more detail in the architecture topic.
Like the CommercialPaper class, this class extends an application-defined StateList class which creates a
common abstraction for a list of states – in this case, all the commercial papers in PaperNet.
The addPaper() method is a simple veneer over the StateList.addState() method:
async addPaper(paper) {
return this.addState(paper);
}
You can see in the StateList.js file how the StateList class uses the Fabric API putState() to write the
commercial paper as state data in the ledger:
async addState(state) {
let key = this.ctx.stub.createCompositeKey(this.name, state.getSplitKey());
let data = State.serialize(state);
await this.ctx.stub.putState(key, data);
}
Every piece of state data in a ledger requires these two fundamental elements:
• Key: key is formed with createCompositeKey() using a fixed name and the state’s own key. The name
was assigned when the PaperList object was constructed, and state.getSplitKey() determines each
state’s unique key.
• Data: data is simply the serialized form of the commercial paper state, created using the State.
serialize() utility method. The State class serializes and deserializes data using JSON, and the State’s
business object class as required, in our case CommercialPaper, again set when the PaperList object
was constructed.
Notice how a StateList doesn’t store anything about an individual state or the total list of states – it delegates all
of that to the Fabric state database. This is an important design pattern – it reduces the opportunity for ledger MVCC
collisions in Hyperledger Fabric.
The StateList getState() and updateState() methods work in similar ways:
async getState(key) {
let ledgerKey = this.ctx.stub.createCompositeKey(this.name, State.splitKey(key));
let data = await this.ctx.stub.getState(ledgerKey);
let state = State.deserialize(data, this.supportedClasses);
return state;
}
async updateState(state) {
let key = this.ctx.stub.createCompositeKey(this.name, state.getSplitKey());
let data = State.serialize(state);
await this.ctx.stub.putState(key, data);
}
See how they use the Fabric APIs putState(), getState() and createCompositeKey() to access the
ledger. We’ll expand this smart contract later to list all commercial papers in PaperNet – what might the method look
like to implement this ledger retrieval?
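One possibility is a StateList method – getAllStates is a hypothetical name – that uses the Fabric API
getStateByPartialCompositeKey() to iterate over every state whose composite key begins with the list name:

async getAllStates() {
    // An empty attribute array matches every key that starts with this list's name
    let iterator = await this.ctx.stub.getStateByPartialCompositeKey(this.name, []);
    let states = [];
    let result = await iterator.next();
    while (!result.done) {
        // Deserialize each state using its declared business object class
        states.push(State.deserialize(result.value.value, this.supportedClasses));
        result = await iterator.next();
    }
    await iterator.close();
    return states;
}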
That’s it! In this topic you’ve understood how to implement the smart contract for PaperNet. You can move to the next
sub topic to see how an application calls the smart contract using the Fabric SDK.
6.5 Application
To help your understanding, we’ll make reference to the commercial paper sample application provided with Hy-
perledger Fabric. You can download it and run it locally. It is written in JavaScript, but the logic is quite language
independent, so you’ll easily be able to see what’s going on! (The sample will become available for Java and Go
as well.)
An application interacts with a blockchain network using the Fabric SDK. Here’s a simplified diagram of how an
application invokes a commercial paper smart contract:
A PaperNet application invokes the commercial paper smart contract to submit an issue transaction request.
An application has to follow six basic steps to submit a transaction:
• Select an identity from a wallet
• Connect to a gateway
• Access the desired network
• Construct a transaction request for a smart contract
• Submit the transaction to the network
• Process the response
You’re going to see how a typical application performs these six steps using the Fabric SDK. You’ll find the application
code in the issue.js file. View it in your browser, or open it in your favourite editor if you’ve downloaded it. Spend
a few moments looking at the overall structure of the application; even with comments and spacing, it’s only 100 lines
of code!
6.5.2 Wallet
Towards the top of issue.js, you’ll see two Fabric classes are brought into scope:
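A sketch of those require statements, assuming the FileSystemWallet and Gateway classes of the
fabric-network module:

const { FileSystemWallet, Gateway } = require('fabric-network');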
You can read about the fabric-network classes in the node SDK documentation, but for now, let’s see how they
are used to connect MagnetoCorp’s application to PaperNet. The application uses the Fabric Wallet class as follows:
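A sketch; the wallet directory path is illustrative:

const wallet = new FileSystemWallet('../identity/user/isabella/wallet');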
See how wallet locates a wallet in the local filesystem. The identity retrieved from the wallet is clearly for a user
called Isabella, who is using the issue application. The wallet holds a set of identities – X.509 digital certificates –
which can be used to access PaperNet or any other Fabric network. If you run the tutorial, and look in this directory,
you’ll see the identity credentials for Isabella.
Think of a wallet holding the digital equivalents of your government ID, driving license or ATM card. The X.509
digital certificates within it will associate the holder with an organization, thereby entitling them to rights in a network
channel. For example, Isabella might be an administrator in MagnetoCorp, and this could give her more privileges
than a different user – Balaji from DigiBank. Moreover, a smart contract can retrieve this identity during smart
contract processing using the transaction context.
Note also that wallets don’t hold any form of cash or tokens – they hold identities.
6.5.3 Gateway
The second key class is a Fabric Gateway. Most importantly, a gateway identifies one or more peers that provide
access to a network – in our case, PaperNet. See how issue.js connects to its gateway:
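A sketch, assuming connectionProfile and connectionOptions objects like those discussed below:

const gateway = new Gateway();
await gateway.connect(connectionProfile, connectionOptions);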
Right now, we’re only interested in the channels: and peers: sections of the profile: (We’ve modified the details
slightly to better explain what’s happening.)
channels:
  papernet:
    peers:
      peer1.magnetocorp.com:
        endorsingPeer: true
        eventSource: true

      peer2.digibank.com:
        endorsingPeer: true
        eventSource: true

peers:
  peer1.magnetocorp.com:
    # ... (this peer's connection details are elided in this excerpt)

  peer2.digibank.com:
    url: grpcs://localhost:8051
    grpcOptions:
      ssl-target-name-override: peer2.digibank.com
    tlsCACerts:
      path: certificates/digibank/digibank.com-cert.pem
See how channels: identifies the papernet network channel, and two of its peers. MagnetoCorp has peer1.
magnetocorp.com and DigiBank has peer2.digibank.com, and both have the role of endorsing peers. Link
to these peers via the peers: key, which contains details about how to connect to them, including their respective
network addresses.
The connection profile contains a lot of information – not just peers – but network channels, network orderers, organi-
zations, and CAs, so don’t worry if you don’t understand all of it!
Let’s now turn our attention to the connectionOptions object:
let connectionOptions = {
identity: userName,
wallet: wallet
}
See how it specifies the identity (userName) and wallet (wallet) that should be used to connect to a gateway. These
were assigned values earlier in the code.
There are other connection options which an application could use to instruct the SDK to act intelligently on its behalf.
For example:
let connectionOptions = {
identity: userName,
wallet: wallet,
eventHandlerOptions: {
commitTimeout: 100,
strategy: EventStrategies.MSPID_SCOPE_ANYFORTX
},
}
Here, commitTimeout tells the SDK to wait up to 100 seconds to hear whether a transaction has been commit-
ted. And strategy: EventStrategies.MSPID_SCOPE_ANYFORTX specifies that the SDK can no-
tify an application after a single MagnetoCorp peer has confirmed the transaction, in contrast to strategy:
EventStrategies.NETWORK_SCOPE_ALLFORTX, which requires all peers from MagnetoCorp and Di-
giBank to confirm the transaction.
If you’d like to, read more about how connection options allow applications to specify goal-oriented behaviour without
having to worry about how it is achieved.
The peers defined in the gateway connectionProfile.yaml provide issue.js with access to PaperNet. Be-
cause these peers can be joined to multiple network channels, the gateway actually provides the application with access
to multiple network channels!
See how the application selects a particular channel:
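A sketch, assuming the channel is known to the connection profile by the name PaperNet:

const network = await gateway.getNetwork('PaperNet');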
From this point onwards, network will provide access to PaperNet. Moreover, if the application wanted to access
another network, BondNet, at the same time, it is easy:
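Along these lines:

const network2 = await gateway.getNetwork('BondNet');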
Now our application has access to a second network, BondNet, simultaneously with PaperNet!
We can see here a powerful feature of Hyperledger Fabric – applications can participate in a network of networks, by
connecting to multiple gateway peers, each of which is joined to multiple network channels. Applications will have
different rights in different channels according to their wallet identity provided in gateway.connect().
The application is now ready to issue a commercial paper. To do this, it’s going to use
CommercialPaperContract, and again, it’s fairly straightforward to access this smart contract:
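A sketch of that call:

const contract = await network.getContract('papercontract', 'org.papernet.commercialpaper');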
Note how the application provides a name – papercontract – and an explicit contract name: org.papernet.
commercialpaper! We see how a contract name picks out one contract from the papercontract.js chain-
code file that contains many contracts. In PaperNet, papercontract.js was installed and instantiated with the
name papercontract, and if you’re interested, read how to install and instantiate a chaincode containing multiple
smart contracts.
If our application simultaneously required access to another contract in PaperNet or BondNet this would be easy:
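A sketch; EuroCommercialPaperContract and BondContract are illustrative chaincode names:

const euroContract = await network.getContract('EuroCommercialPaperContract');
const bondContract = await network2.getContract('BondContract');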
In these examples, note how we didn’t use a qualifying contract name – we have only one smart contract per file, and
getContract() will use the first contract it finds.
Recall the transaction MagnetoCorp uses to issue its first commercial paper:
Txn = issue
Issuer = MagnetoCorp
Paper = 00001
Issue time = 31 May 2020 09:00:00 EST
Maturity date = 30 November 2020
Face value = 5M USD
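The application submits it along these lines; the argument formats shown are illustrative:

const issueResponse = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');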
See how the submitTransaction() parameters match those of the transaction request. It’s these values that
will be passed to the issue() method in the smart contract, and used to create a new commercial paper. Recall its
signature:
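As sketched earlier:

async issue(ctx, issuer, paperNumber, issueDateTime, maturityDateTime, faceValue) {...}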
It might appear that a smart contract receives control shortly after the application issues submitTransaction(),
but that’s not the case. Under the covers, the SDK uses the connectionOptions and connectionProfile
details to send the transaction proposal to the right peers in the network, where it can get the required endorsements.
But the application doesn’t need to worry about any of this – it just issues submitTransaction and the SDK takes
care of it all!
Let’s now turn our attention to how the application handles the response!
Recall from papercontract.js how the issue transaction returns a commercial paper response:
return paper.toBuffer();
You’ll notice a slight quirk – the new paper needs to be converted to a buffer before it is returned to the application.
Notice how issue.js uses the class method CommercialPaper.fromBuffer() to rehydrate the response
buffer as a commercial paper:
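A sketch of that rehydration; the log message is illustrative:

let paper = CommercialPaper.fromBuffer(issueResponse);
console.log(`${paper.issuer} commercial paper ${paper.paperNumber} successfully issued for value ${paper.faceValue}`);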
See how the same paper class has been used in both the application and smart contract – if you structure your code
like this, it’ll really help readability and reuse.
As with the transaction proposal, it might appear that the application receives control soon after the smart contract
completes, but that’s not the case. Under the covers, the SDK manages the entire consensus process, and notifies the
application when it is complete according to the strategy connectionOption. If you’re interested in what the SDK
does under the covers, read the detailed transaction flow.
That’s it! In this topic you’ve understood how to call a smart contract from a sample application by examining how
MagnetoCorp’s application issues a new commercial paper in PaperNet. Now examine how the key ledger and smart
contract data structures behind them are designed, in the architecture topic.
6.6 APIs
This section elaborates the key features for client application and smart contract development found in Hyperledger
Fabric. A solid understanding of the features will help you design and implement efficient and effective solutions.
Chaincode
In the Developing Applications topic, we can see how the Fabric SDKs provide high level programming abstractions
which help application and smart contract developers to focus on their business problem, rather than the low level
details of how to interact with a Fabric network.
Smart contracts are one example of a high level programming abstraction, and it is possible to define smart contracts
within a chaincode container. When a chaincode is installed and instantiated, all the smart contracts within it are
made available to the corresponding channel.
Multiple smart contracts can be defined within a chaincode. Each is uniquely identified by its name within a
chaincode.
In the diagram above, chaincode A has three smart contracts defined within it, whereas chaincode B has four smart
contracts. See how the chaincode name is used to fully qualify a particular smart contract.
The ledger structure is defined by a set of deployed smart contracts. That’s because the ledger contains facts about the
business objects of interest to the network (such as commercial paper within PaperNet), and these business objects are
moved through their lifecycle (e.g. issue, buy, redeem) by the transaction functions defined within a smart contract.
In most cases, a chaincode will only have one smart contract defined within it. However, it can make sense to keep
related smart contracts together in a single chaincode. For example, commercial papers denominated in different cur-
rencies might have contracts EuroPaperContract, DollarPaperContract, YenPaperContract which
might need to be kept synchronized with each other in the channel to which they are deployed.
Name
Each smart contract within a chaincode is uniquely identified by its contract name. A smart contract can explicitly
assign this name when the class is constructed, or let the Contract class implicitly assign a default name.
Examine the papercontract.js chaincode file:
constructor() {
// Unique name when multiple contracts per chaincode file
super('org.papernet.commercialpaper');
}
See how the CommercialPaperContract constructor specifies the contract name as org.papernet.
commercialpaper. The result is that within the papercontract chaincode, this smart contract is now as-
sociated with the contract name org.papernet.commercialpaper.
If an explicit contract name is not specified, then a default name is assigned – the name of the class. In our example,
the default contract name would be CommercialPaperContract.
Choose your names carefully. It’s not just that each smart contract must have a unique name; a well-chosen name
is illuminating. Specifically, using an explicit DNS-style naming convention is recommended to help organize clear
and meaningful names; org.papernet.commercialpaper conveys that the PaperNet network has defined a
standard commercial paper smart contract.
Contract names are also helpful to disambiguate different smart contract transaction functions with the same name in
a given chaincode. This happens when smart contracts are closely related; their transaction names will tend to be the
same. We can see that a transaction is uniquely defined within a channel by the combination of its chaincode and smart
contract name.
Contract names must be unique within a chaincode file. Some code editors will detect multiple definitions of the same
class name before deployment. Regardless, the chaincode will return an error if multiple classes with the same contract
name are explicitly or implicitly specified.
Application
Once a chaincode has been installed on a peer and instantiated on a channel, the smart contracts in it are accessible to
an application:
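A sketch of that access pattern, reusing names from the commercial paper sample; the argument formats are
illustrative:

const network = await gateway.getNetwork('PaperNet');
const contract = await network.getContract('papercontract', 'org.papernet.commercialpaper');
const response = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');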
See how the application accesses the smart contract with the network.getContract() method. Passing the
papercontract chaincode name and the org.papernet.commercialpaper contract name returns a contract
reference, which can then be used to submit transactions to issue commercial paper with the contract.
submitTransaction() API.
Default contract
The first smart contract defined in a chaincode is called the default smart contract. A default is helpful because
a chaincode will usually have one smart contract defined within it; a default allows the application to access those
transactions directly – without specifying a contract name.
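For example (a sketch; the argument formats are illustrative):

const network = await gateway.getNetwork('PaperNet');

// No contract name: the default smart contract in papercontract is used
const contract = await network.getContract('papercontract');

// Invokes the issue transaction on CommercialPaperContract, the default contract
const response = await contract.submitTransaction('issue', 'MagnetoCorp', '00001', '2020-05-31', '2020-11-30', '5000000');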
This works because the default smart contract in papercontract is CommercialPaperContract and it has
an issue transaction. Note that the issue transaction in BondContract can only be invoked by explicitly
addressing it. Likewise, even though the cancel transaction is unique, because BondContract is not the default
smart contract, it must also be explicitly addressed.
In most cases, a chaincode will only contain a single smart contract, so careful naming of the chaincode can reduce the
need for developers to care about chaincode as a concept. In the example code above it feels like papercontract
is a smart contract.
In summary, contract names are a straightforward mechanism to identify individual smart contracts within a given
chaincode. Contract names make it easy for applications to find a particular smart contract and use it to access the
ledger.
Smart contracts in different chaincodes cannot directly access each other’s world state; when required, a smart contract
can do this by performing a chaincode-to-chaincode invocation. Finally, a blockchain can contain transactions which
relate to different world states.
In this topic, we’re going to cover:
• The importance of namespaces
• What is a chaincode namespace
• Channels and namespaces
• How to use chaincode namespaces
• How to access world states across smart contracts
• Design considerations for chaincode namespaces
Motivation
A namespace is a common concept. We understand that Park Street, New York and Park Street, Seattle are different
streets even though they have the same name. The city forms a namespace for Park Street, simultaneously providing
freedom and clarity.
It’s the same in a computer system. Namespaces allow different users to program and operate different parts of a
shared system, without getting in each other’s way. Many programming languages have namespaces so that programs
can freely assign unique identifiers, such as variable names, without worrying about other programs doing the same.
We’ll see that Hyperledger Fabric uses namespaces to help smart contracts keep their ledger world state separate from
other smart contracts.
Scenario
Let’s examine how the ledger world state organizes facts about business objects that are important to the organizations
in a channel using the diagram below. Whether these objects are commercial papers, bonds, or vehicle registrations,
and wherever they are in their lifecycle, they are maintained as states within the ledger world state database. A smart
contract manages these business objects by interacting with the ledger (world state and blockchain), and in most cases
this will involve it querying or updating the ledger world state.
It’s vitally important to understand that the ledger world state is partitioned according to the chaincode of the smart
contract that accesses it, and this partitioning, or namespacing, is an important design consideration for architects,
administrators and programmers.
The ledger world state is separated into different namespaces according to the chaincode that accesses it. Within
a given channel, smart contracts in the same chaincode share the same world state, and smart contracts in different
chaincodes cannot directly access each other’s world state. Likewise, a blockchain can contain transactions that relate
to different chaincode world states.
In our example, we can see four smart contracts defined in two different chaincodes, each of which is in their own
chaincode container. The euroPaper and yenPaper smart contracts are defined in the papers chaincode. The
situation is similar for the euroBond and yenBond smart contracts – they are defined in the bonds chaincode. This
design helps application programmers understand whether they are working with commercial papers or bonds priced
in Euros or Yen, and because the rules for each financial product don’t really change for different currencies, it makes
sense to manage their deployment in the same chaincode.
The diagram also shows the consequences of this deployment choice. The database management system (DBMS)
creates different world state databases for the papers and bonds chaincodes and the smart contracts contained
within them. World state A and world state B are each held within distinct databases; the data are isolated
from each other such that a single world state query (for example) cannot access both world states. The world state is
said to be namespaced according to its chaincode.
See how world state A contains two lists of commercial papers paperListEuro and paperListYen. The
states PAP11 and PAP21 are instances of each paper managed by the euroPaper and yenPaper smart contracts
respectively. Because they share the same chaincode namespace, their keys (PAPxyz) must be unique within the
namespace of the papers chaincode, a little like a street name is unique within a town. Notice how it would be
possible to write a smart contract in the papers chaincode that performed an aggregate calculation over all the
commercial papers – whether priced in Euros or Yen – because they share the same namespace. The situation is
similar for bonds – they are held within world state B which maps to a separate bonds database, and their keys
must be unique.
Just as importantly, namespaces mean that euroPaper and yenPaper cannot directly access world state B,
and that euroBond and yenBond cannot directly access world state A. This isolation is helpful, as commercial
papers and bonds are very distinct financial instruments; they have different attributes and are subject to different rules.
It also means that papers and bonds could have the same keys, because they are in different namespaces. This is
helpful; it provides a significant degree of freedom for naming. Use this freedom to name different business objects
meaningfully.
Most importantly, we can see that a blockchain is associated with the peer operating in a particular channel, and that
it contains transactions that affect both world state A and world state B. That’s because the blockchain is
the most fundamental data structure contained in a peer. The set of world states can always be recreated from this
blockchain, because they are the cumulative results of the blockchain’s transactions. A world state helps simplify
smart contracts and improve their efficiency, as they usually only require the current value of a state. Keeping world
states separate via namespaces helps smart contracts isolate their logic from other smart contracts, rather than having
to worry about transactions that correspond to different world states. For example, a bonds contract does not need to
worry about paper transactions, because it cannot see their resultant world state.
It’s also worth noticing that the peer, chaincode containers and DBMS all are logically different processes. The peer
and all its chaincode containers are always in physically separate operating system processes, but the DBMS can be
configured to be embedded or separate, depending on its type. For LevelDB, the DBMS is wholly contained within
the peer, but for CouchDB, it is a separate operating system process.
It’s important to remember that the namespace choices in this example are the result of a business requirement to share
commercial papers in different currencies but keep them separate from bonds. Think about how the namespace
structure would be modified to meet a business requirement to keep every financial asset class separate, or to share all
commercial papers and bonds.
Channels
If a peer is joined to multiple channels, then a new blockchain is created and managed for each channel. Moreover,
every time a chaincode is instantiated in a new channel, a new world state database is created for it. It means that the
channel also forms a kind of namespace alongside that of the chaincode for the world state.
However, the same peer and chaincode container processes can be simultaneously joined to multiple channels – unlike
blockchains and world state databases, these processes do not increase with the number of channels joined.
For example, if the papers and bonds chaincodes were instantiated on a new channel, there would be a totally separate
blockchain created, and two new world state databases created. However, the peer and chaincode containers would
not increase; each would just be connected to multiple channels.
Usage
Let’s use our commercial paper example to show how an application uses a smart contract with namespaces. It’s worth
noting that an application communicates with the peer, and the peer routes the request to the appropriate chaincode
container which then accesses the DBMS. This routing is done by the peer core component shown in the diagram.
Here’s the code for an application that uses both commercial papers and bonds, priced in Euros and Yen. The code is
fairly self-explanatory:
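A sketch of that code; the channel name and the argument lists are illustrative:

const network = await gateway.getNetwork('papernet');

// Contracts are addressed by chaincode name and contract name
const euroPaper = await network.getContract('papers', 'euroPaper');
const yenPaper = await network.getContract('papers', 'yenPaper');
const euroBond = await network.getContract('bonds', 'euroBond');
const yenBond = await network.getContract('bonds', 'yenBond');

await euroPaper.submitTransaction('issue', 'PAP11' /* , ...other arguments */);  // 1a
await yenPaper.submitTransaction('redeem', 'PAP21' /* , ... */);                 // 2a
await euroBond.submitTransaction('buy', 'BON31' /* , ... */);                    // 3a
await yenBond.submitTransaction('sell', 'BON41' /* , ... */);                    // 4a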
• Submits an issue transaction to the network for commercial paper PAP11 using the euroPaper contract.
See interaction point 1a. This results in the creation of a commercial paper represented by state PAP11 in
world state A; interaction point 1b. This operation is captured as a transaction in the blockchain at inter-
action point 1c.
• Submits a redeem transaction to the network for commercial paper PAP21 using the yenPaper contract. See
interaction point 2a. This results in an update to the commercial paper represented by state PAP21 in world
state A; interaction point 2b. This operation is captured as a transaction in the blockchain at interaction point
2c.
• Submits a buy transaction to the network for bond BON31 using the euroBond contract. See interaction point
3a. This results in an update to the bond represented by state BON31 in world state B; interaction point
3b. This operation is captured as a transaction in the blockchain at interaction point 3c.
• Submits a sell transaction to the network for bond BON41 using the yenBond contract. See interaction point
4a. This results in an update to the bond represented by state BON41 in world state B; interaction point
4b. This operation is captured as a transaction in the blockchain at interaction point 4c.
See how smart contracts interact with the world state:
• euroPaper and yenPaper contracts can directly access world state A, but cannot directly access
world state B. World state A is physically held in the papers database in the database manage-
ment system (DBMS) corresponding to the papers chaincode.
• euroBond and yenBond contracts can directly access world state B, but cannot directly access world
state A. World state B is physically held in the bonds database in the database management system
(DBMS) corresponding to the bonds chaincode.
See how the blockchain captures transactions for all world states:
• Interactions 1c and 2c correspond to transactions that create and update commercial papers PAP11 and PAP21
respectively. These are both contained within world state A.
• Interactions 3c and 4c correspond to transactions that both update bonds BON31 and BON41. These are both
contained within world state B.
• If world state A or world state B were destroyed for any reason, they could be recreated by replaying
all the transactions in the blockchain.
As we saw in our example scenario, euroPaper and yenPaper cannot directly access world state B. That’s
because we have designed our chaincodes and smart contracts so that these chaincodes and world states are kept
separate from each other. However, let’s imagine that euroPaper needs to access world state B.
Why might this happen? Imagine that when a commercial paper was issued, the smart contract wanted to price the
paper according to the current return on bonds with a similar maturity date. In this case it will be necessary for the
euroPaper contract to be able to query the price of bonds in world state B. Look at the following diagram to
see how we might structure this interaction.
How chaincodes and smart contracts can indirectly access another world state – via its chaincode.
Notice how:
• the application submits an issue transaction in the euroPaper smart contract to issue PAP11. See interac-
tion 1a.
• the issue transaction in the euroPaper smart contract calls the query transaction in the euroBond smart
contract. See interaction point 1b.
• the query transaction in euroBond can retrieve information from world state B. See interaction point 1c.
• when control returns to the issue transaction, it can use the information in the response to price the paper and
update world state A with information. See interaction point 1d.
• the flow of control for issuing commercial paper priced in Yen is the same. See interaction points 2a, 2b, 2c and
2d.
Control is passed between chaincode using the invokeChaincode() API. This API passes control from one
chaincode to another chaincode.
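A sketch of how euroPaper’s issue transaction might call the query transaction in euroBond; the chaincode
name, the qualified function name, and the channel name are assumptions:

// Inside the issue transaction of the euroPaper smart contract (interaction 1b).
// maturityDateTime is assumed to be in scope from the issue parameters.
const response = await ctx.stub.invokeChaincode(
    'bonds',                               // target chaincode
    ['euroBond:query', maturityDateTime],  // qualified function name plus arguments
    'papernet'                             // channel on which both chaincodes are instantiated
);
// response.payload carries the serialized result read from world state B (interaction 1c)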
Although we have only discussed query transactions in the example, it is possible to invoke a smart contract which
will update the called chaincode’s world state. See the considerations below.
Considerations
Structure
A smart contract can define Before, After and Unknown transaction handlers. In this example, BeforeFunction()
is called before the issue, buy and redeem transactions; AfterFunction() is called after the issue, buy and
redeem transactions; and UnknownFunction() is only called if a request is made to invoke a transaction not
defined in the smart contract. (The diagram is simplified by not repeating the BeforeFunction and
AfterFunction boxes for each transaction.)
Types of handler
There are three types of transaction handlers which cover different aspects of the interaction between an application
and a smart contract:
• Before handler: is called before every smart contract transaction is invoked. The handler will usually modify
the transaction context to be used by the transaction. The handler has access to the full range of Fabric APIs;
for example, it can issue getState() and putState().
• After handler: is called after every smart contract transaction is invoked. The handler will usually perform
post-processing common to all transactions, and also has full access to the Fabric APIs.
• Unknown handler: is called if an attempt is made to invoke a transaction that is not defined in a smart contract.
Typically, the handler will record the failure for subsequent processing by an administrator. The handler has full
access to the Fabric APIs.
Defining a handler
Transaction handlers are added to the smart contract as methods with well defined names. Here’s an example which
adds a handler of each type:
...
async beforeTransaction(ctx) {
// Write the transaction ID as an informational to the console
console.info(ctx.stub.getTxID());
};
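// The sample also defines an after handler; this body is an illustrative sketch.
// It receives the result returned by the invoked transaction.
async afterTransaction(ctx, result) {
    // Post-processing common to all transactions, e.g. logging the result
    console.info(result);
};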
async unknownTransaction(ctx) {
// This handler throws an exception
throw new Error('Unknown transaction function');
};
The form of a transaction handler definition is similar for all handler types, but notice how
afterTransaction(ctx, result) also receives any result returned by the transaction.
Handler processing
Once a handler has been added to the smart contract, it can be invoked during transaction processing. During process-
ing, the handler receives ctx, the transaction context, performs some processing, and returns control as it completes.
Processing continues as follows:
• Before handler: If the handler completes successfully, the transaction is called with the updated context. If the
handler throws an exception, then the transaction is not called and the smart contract fails with the exception
error message.
• After handler: If the handler completes successfully, then the smart contract completes as determined by the
invoked transaction. If the handler throws an exception, then the transaction fails with the exception error
message.
• Unknown handler: The handler should complete by throwing an exception with the required error message. If
an Unknown handler is not specified, or an exception is not thrown by it, there is sensible default processing;
the smart contract will fail with an unknown transaction error message.
If the handler requires access to the function and parameters, then it is easy to do this:
async beforeTransaction(ctx) {
    // Retrieve details of the invoked transaction
    let txnDetails = ctx.stub.getFunctionAndParameters();
    // txnDetails.fcn holds the function name; txnDetails.params holds its parameters
}
Multiple handlers
It is only possible to define at most one handler of each type for a smart contract. If a smart contract needs to invoke
multiple functions during before, after or unknown handling, it should coordinate this from within the appropriate
function.
Scenario
A connection profile is used to configure a gateway. Gateways are important for many reasons, the primary being to
simplify an application’s interaction with a network channel.
Two applications, issue and buy, use gateways 1&2 configured with connection profiles 1&2. Each profile describes a
different subset of MagnetoCorp and DigiBank network components. Each connection profile must contain sufficient
information for a gateway to interact with the network on behalf of the issue and buy applications. See the text for a
detailed explanation.
A connection profile contains a description of a network view, expressed in a technical syntax, which can either be
JSON or YAML. In this topic, we use the YAML representation, as it’s easier for you to read. Static gateways need
more information than dynamic gateways because the latter can use service discovery to dynamically augment the
information in a connection profile.
A connection profile should not be an exhaustive description of a network channel; it just needs to contain enough
information for the gateway that’s using it. In the network above, connection profile 1 needs to contain at least
the endorsing organizations and peers for the issue transaction, as well as the peers that will notify the
gateway when the transaction has been committed to the ledger.
It’s easiest to think of a connection profile as describing a view of the network. It could be a comprehensive view, but
that’s unrealistic for a few reasons:
• Peers, orderers, certificate authorities, channels, and organizations are added and removed according to demand.
• Components can start and stop, or fail unexpectedly (e.g. power outage).
• A gateway doesn’t need a view of the whole network, only what’s necessary to successfully handle transaction
submission or event notification for example.
• Service Discovery can augment the information in a connection profile. Specifically, dynamic gateways can be
configured with minimal Fabric topology information; the rest can be discovered.
A static connection profile is normally created by an administrator who understands the network topology in detail.
That’s because a static profile can contain quite a lot of information, and an administrator needs to capture this in
the corresponding connection profile. In contrast, dynamic profiles minimize the amount of definition required, and
therefore can be a better choice for developers who want to get going quickly, or administrators who want to create
a more responsive gateway. Connection profiles are created in either the YAML or JSON format using an editor of
choice.
Usage
We’ll see how to define a connection profile in a moment; let’s first see how it is used by a sample MagnetoCorp
issue application:
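A sketch of that flow; the profile path is illustrative:

const fs = require('fs');
const yaml = require('js-yaml');
const { Gateway } = require('fabric-network');

// Load the connection profile from the file system; convert YAML into a JS object
const connectionProfile = yaml.safeLoad(fs.readFileSync('../gateway/paperNet.yaml', 'utf8'));

// Configure the gateway using the profile and the connection options
const gateway = new Gateway();
await gateway.connect(connectionProfile, connectionOptions);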
After loading some required classes, see how the paperNet.yaml gateway file is loaded from the file system,
converted to a JSON object using the yaml.safeLoad() method, and used to configure a gateway using its
connect() method.
By configuring a gateway with this connection profile, the issue application is providing the gateway with the relevant
network topology it should use to process transactions. That’s because the connection profile contains sufficient infor-
mation about the PaperNet channels, organizations, peers, orderers and CAs to ensure transactions can be successfully
processed.
It’s good practice for a connection profile to define more than one peer for any given organization – it prevents a single
point of failure. This practice also applies to dynamic gateways; to provide more than one starting point for service
discovery.
A DigiBank buy application would typically configure its gateway with a similar connection profile, but with some
important differences. Some elements will be the same, such as the channel; some elements will overlap, such as the
endorsing peers. Other elements will be completely different, such as notification peers or certificate authorities for
example.
The connectionOptions passed to a gateway complement the connection profile. They allow an application
to declare how it would like the gateway to use the connection profile. They are interpreted by the SDK to control
interaction patterns with network components, for example to select which identity to connect with, or which peers to
use for event notifications. Read about the list of available connection options and when to use them.
Structure
To help you understand the structure of a connection profile, we’re going to step through an example for the network
shown above. Its connection profile is based on the PaperNet commercial paper sample, and stored in the GitHub
repository. For convenience, we’ve reproduced it below. You will find it helpful to display it in another browser
window as you now read about it:
• Line 9: name: "papernet.magnetocorp.profile.sample"
This is the name of the connection profile. Try to use DNS style names; they are a very easy way to convey
meaning.
• Line 16: x-type: "hlfv1"
Users can add their own x- properties that are “application-specific” – just like with HTTP headers. They are
provided primarily for future use.
• Line 20: description: "Sample connection profile for documentation topic"
A short description of the connection profile. Try to make this helpful for the reader who might be seeing this
for the first time!
• Line 25: version: "1.0"
The schema version for this connection profile. Currently only version 1.0 is supported, and it is not envisioned
that this schema will change frequently.
• orderers: The physical information for orderers is listed in the same way: each orderer’s IP
address and port, and gRPC options that can override the defaults used when communicating with the orderer,
if necessary. As with peers:, for high availability, specifying more than one orderer is a good idea.
• Line 152: peers:
The physical information for all previous peers is now listed. This connection profile has three peers for Mag-
netoCorp: peer1, peer2, and peer3; for DigiBank, a single peer peer9 has its information listed. For
each peer, as with orderers, their IP address and port is listed, together with gRPC options that can override the
defaults used when communicating with a particular peer, if necessary.
• Line 194: certificateAuthorities:
The physical information for certificate authorities is now listed. The connection profile has a single CA listed
for MagnetoCorp, ca1-magnetocorp, and its physical information follows. As well as IP details, the reg-
istrar information allows this CA to be used for Certificate Signing Requests (CSR). These are used to request
new certificates for locally generated public/private key pairs.
Now you’ve understood a connection profile for MagnetoCorp, you might like to look at a corresponding profile
for DigiBank. Locate where the profile is the same as MagnetoCorp’s, see where it’s similar, and finally where it’s
different. Think about why these differences make sense for DigiBank applications.
That’s everything you need to know about connection profiles. In summary, a connection profile defines sufficient
channels, organizations, peers, orderers and certificate authorities for an application to configure a gateway. The
gateway allows the application to focus on business logic rather than the details of the network topology.
Sample
This file is reproduced inline from the GitHub commercial paper sample.
1: ---
2: #
3: # [Required]. A connection profile contains information about a set of network
4: # components. It is typically used to configure a gateway, allowing applications
5: # to interact with a network channel without worrying about the underlying
6: # topology. A connection profile is normally created by an administrator who
7: # understands this topology.
8: #
9: name: "papernet.magnetocorp.profile.sample"
10: #
11: # [Optional]. Analogous to HTTP, properties with an "x-" prefix are deemed
12: # "application-specific", and ignored by the gateway. For example, property
13: # "x-type" with value "hlfv1" was originally used to identify a connection
14: # profile for Fabric 1.x rather than 0.x.
15: #
16: x-type: "hlfv1"
17: #
18: # [Required]. A short description of the connection profile
19: #
20: description: "Sample connection profile for documentation topic"
21: #
22: # [Required]. Connection profile schema version. Used by the gateway to
23: # interpret these data.
24: #
25: version: "1.0"
26: #
27: # [Optional]. A logical description of each network channel; its peer and
28: # orderer names and their roles within the channel. The physical details of
29: # these components (e.g. peer IP addresses) will be specified later in the
(the remainder of the sample connection profile is omitted here)
Scenario
A connection option specifies a particular aspect of a gateway’s behaviour. Gateways are important for many reasons,
the primary being to allow an application to focus on business logic and smart contracts, while it manages interactions
with the many components of a network.
The different interaction points where connection options control behaviour. These options are explained fully in the
text.
One example of a connection option might be to specify that the gateway used by the issue application should use
identity Isabella to submit transactions to the papernet network. Another might be that a gateway should wait
for all three nodes from MagnetoCorp to confirm a transaction has been committed before returning control. Connection
options allow applications to specify the precise behaviour of a gateway’s interaction with the network. Without a
gateway, applications need to do a lot more work; gateways save you time, make your application more readable, and
less error-prone.
Usage
We’ll describe the full set of connection options available to an application in a moment; let’s first see how they
are specified by the sample MagnetoCorp issue application:
const connectionOptions = {
identity: userName,
wallet: wallet,
eventHandlerOptions: {
commitTimeout: 100,
strategy: EventStrategies.MSPID_SCOPE_ANYFORTX
}
};
See how the identity and wallet options are simple properties of the connectionOptions object. They
have values userName and wallet respectively, which were set earlier in the code. Contrast these options with the
eventHandlerOptions option which is an object in its own right. It has two properties: commitTimeout:
100 (measured in seconds) and strategy: EventStrategies.MSPID_SCOPE_ANYFORTX.
See how connectionOptions is passed to a gateway as a complement to connectionProfile; the network
is identified by the connection profile and the options specify precisely how the gateway should interact with it. Let’s
now look at the available options.
Options
Considerations
The following list of considerations is helpful when deciding how to choose connection options.
• eventHandlerOptions.commitTimeout and eventHandlerOptions.strategy work together. For example,
commitTimeout: 100 and strategy: EventStrategies.MSPID_SCOPE_ANYFORTX means that the gateway
will wait for up to 100 seconds for any peer to confirm a transaction has been committed. In contrast,
specifying strategy: EventStrategies.NETWORK_SCOPE_ALLFORTX means that the gateway will wait up to
100 seconds for all peers in all organizations.
• The default value of eventHandlerOptions.strategy is EventStrategies.MSPID_SCOPE_ALLFORTX, which
waits for all peers in the application’s organization to commit the transaction. This is a good default
because applications can be sure that all their peers have an up-to-date copy of the ledger, minimizing
concurrency issues.
However, as the number of peers in an organization grows, it becomes a little unnecessary to wait for all peers,
in which case using a pluggable event handler can provide a more efficient strategy. For example the same set
of peers could be used to submit transactions and listen for notifications, on the safe assumption that consensus
will keep all ledgers synchronized.
• Service discovery requires clientTlsIdentity to be set. That’s because the peers exchanging informa-
tion with an application need to be confident that they are exchanging information with entities they trust. If
clientTlsIdentity is not set, then discovery will not be used, regardless of whether or not the discovery
option is set.
• Although applications can set connection options when they connect to the gateway, it can be necessary for
these options to be overridden by an administrator. That’s because options relate to network interactions, which
can vary over time. For example, an administrator might want to understand the effect of using service discovery
on network performance.
A good approach is to define application overrides in a configuration file which is read by the application when
it configures its connection to the gateway.
Because the discovery options enabled and asLocalHost most frequently need to be
overridden by administrators, the environment variables INITIALIZE-WITH-DISCOVERY and
DISCOVERY-AS-LOCALHOST are provided for convenience. The administrator should set these in the pro-
duction runtime environment of the application, which will most likely be a docker container.
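A minimal sketch of how an application might honour these overrides when building its connection options
(the handling is illustrative; userName and wallet are as in the earlier example):

// Read administrator overrides from the environment, defaulting to discovery on
const discoveryEnabled = (process.env['INITIALIZE-WITH-DISCOVERY'] || 'true') === 'true';
const asLocalhost = (process.env['DISCOVERY-AS-LOCALHOST'] || 'true') === 'true';

const connectionOptions = {
    identity: userName,
    wallet: wallet,
    discovery: { enabled: discoveryEnabled, asLocalhost: asLocalhost }
};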
6.7.8 Wallet
Scenario
When an application connects to a network channel such as PaperNet, it selects a user identity to do so, for example
ID1. The channel MSPs associate ID1 with a role within a particular organization, and this role will ultimately
determine the application’s rights over channel resources. For example, ID1 might identify a user as a member of
the MagnetoCorp organization who can read and write to the ledger, whereas ID2 might identify an administrator in
MagnetoCorp who can add a new organization to a consortium.
Two users, Isabella and Balaji, have wallets containing different identities they can use to connect to different network
channels, PaperNet and BondNet.
Consider the example of two users; Isabella from MagnetoCorp and Balaji from DigiBank. Isabella is going to use
App 1 to invoke a smart contract in PaperNet and a different smart contract in BondNet. Similarly, Balaji is going to
use App 2 to invoke smart contracts, but only in PaperNet. (It’s very easy for applications to access multiple networks
and multiple smart contracts within them.)
See how:
• MagnetoCorp uses CA1 to issue identities and DigiBank uses CA2 to issue identities. These identities are stored
in user wallets.
• Balaji’s wallet holds a single identity, ID4 issued by CA2. Isabella’s wallet has many identities, ID1, ID2 and
ID3, issued by CA1. Wallets can hold multiple identities for a single user, and each identity can be issued by a
different CA.
• Both Isabella and Balaji connect to PaperNet, and its MSPs determine that Isabella is a member of the Magne-
toCorp organization, and Balaji is a member of the DigiBank organization, because of the respective CAs that
issued their identities. (It is possible for an organization to use multiple CAs, and for a single CA to support
multiple organizations.)
• Isabella can use ID1 to connect to both PaperNet and BondNet. In both cases, when Isabella uses this identity,
she is recognized as a member of MagnetoCorp.
• Isabella can use ID2 to connect to BondNet, in which case she is identified as an administrator of MagnetoCorp.
This gives Isabella two very different privileges: ID1 identifies her as a simple member of MagnetoCorp who
can read and write to the BondNet ledger, whereas ID2 identifies her as a MagnetoCorp administrator who can
add a new organization to BondNet.
• Balaji cannot connect to BondNet with ID4. If he tried to connect, ID4 would not be recognized as belonging
to DigiBank because CA2 is not known to BondNet’s MSP.
Types
There are different types of wallets according to where they store their identities:
The four different types of wallet: File system, In-memory, Hardware Security Module (HSM) and CouchDB.
• FileSystem: This is the most common place to store wallets; file systems are pervasive, easy to understand, and
can be network mounted. They are a good default choice for wallets.
Use the FileSystemWallet class to manage file system wallets.
• In-memory: A wallet in application storage. Use this type of wallet when your application is running in a
constrained environment without access to a file system; typically a web browser. It’s worth remembering that
this type of wallet is volatile; identities will be lost after the application ends normally or crashes.
Use the InMemoryWallet class to manage in-memory wallets.
• Hardware Security Module: A wallet stored in an HSM. This ultra-secure, tamper-proof device stores digital
identity information, particularly private keys. HSMs can be locally attached to your computer or network
accessible. Most HSMs provide the ability to perform on-board encryption with private keys, such that the
private keys never leave the HSM.
Currently you should use the FileSystemWallet class in combination with the HSMWalletMixin class to
manage HSM wallets.
• CouchDB: A wallet stored in CouchDB. This is the rarest form of wallet storage, but for those users who want
to use the database back-up and restore mechanisms, CouchDB wallets can provide a useful option to simplify
disaster recovery.
Use the CouchDBWallet class to manage CouchDB wallets.
Structure
A single wallet can hold multiple identities, each issued by a particular Certificate Authority. Each identity has a
standard structure comprising a descriptive label, an X.509 certificate containing a public key, a private key, and some
Fabric-specific metadata. Different wallet types map this structure appropriately to their storage mechanism.
A Fabric wallet can hold multiple identities, each with a certificate issued by a different Certificate Authority.
Identities comprise a certificate, private key and Fabric metadata.
There are a couple of key class methods that make it easy to manage wallets and identities:
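// Create a new identity from an X.509 certificate and private key, then
// add it to the wallet under a label. (certificate, key and identityLabel
// are assumed to have been set earlier in the program.)
const identity = X509WalletMixin.createIdentity('Org1MSP', certificate, key);
await wallet.import(identityLabel, identity);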
See how the X509WalletMixin.createIdentity() method creates an identity that has metadata
Org1MSP, a certificate and a private key. See how wallet.import() adds this identity to the wallet
with a particular identityLabel.
The Gateway class only requires the mspid metadata to be set for an identity – Org1MSP in the above example. It
currently uses this value to identify particular peers from a connection profile, for example when a specific notification
strategy is requested. In the DigiBank gateway file networkConnection.yaml, see how Org1MSP notifications
will be associated with peer0.org1.example.com:
organizations:
Org1:
mspid: Org1MSP
peers:
- peer0.org1.example.com
You really don’t need to worry about the internal structure of the different wallet types, but if you’re interested, navigate
to a user identity folder in the commercial paper sample:
magnetocorp/identity/user/isabella/
wallet/
User1@org1.example.com/
User1@org1.example.com
c75bd6911aca8089...-priv
c75bd6911aca8089...-pub
You can examine these files, but as discussed, it’s easier to use the SDK to manipulate these data.
Operations
The different wallet classes are derived from a common Wallet base class which provides a standard set of APIs to
manage identities. It means that applications can be made independent of the underlying wallet storage mechanism;
for example, File system and HSM wallets are handled in a very similar way.
Wallets follow a lifecycle: they can be created or opened, and identities can be read, added, deleted and exported.
An application can use a wallet according to a simple lifecycle. Wallets can be opened or created, and subsequently
identities can be added, read, updated, deleted and exported. Spend a little time on the different Wallet methods in
the JSDOC to see how they work; the commercial paper tutorial provides a nice example in addToWallet.js:
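Here is a simplified sketch of what addToWallet.js does; the credential source paths are illustrative (the real
program copies files from the basic network sample), so check the tutorial code for the exact locations:

const fs = require('fs');
const path = require('path');
const { FileSystemWallet, X509WalletMixin } = require('fabric-network');

async function main() {
    // A file system wallet, created at this path if it doesn't yet exist
    const wallet = new FileSystemWallet('../identity/user/isabella/wallet');

    // Load a certificate and private key from disk (illustrative paths)
    const credPath = '/path/to/crypto-material';
    const cert = fs.readFileSync(path.join(credPath, 'User1@org1.example.com-cert.pem')).toString();
    const key = fs.readFileSync(path.join(credPath, 'User1@org1.example.com-priv')).toString();

    // Create a new identity with its MSP metadata, and import it under a label
    const identityLabel = 'User1@org1.example.com';
    const identity = X509WalletMixin.createIdentity('Org1MSP', cert, key);
    await wallet.import(identityLabel, identity);
}

main().then(() => console.log('done'));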
Notice how:
• When the program is first run, a wallet is created on the local file system at .../isabella/wallet.
• A certificate cert and private key key are loaded from the file system.
• A new identity is created with cert, key and Org1MSP using X509WalletMixin.createIdentity().
• The new identity is imported to the wallet with wallet.import() with a label User1@org1.example.com.
That’s everything you need to know about wallets. You’ve seen how they hold identities that are used by applications
on behalf of users to access Fabric network resources. There are different types of wallets available depending on your
application and security needs, and a simple set of APIs to help applications manage wallets and the identities within
them.
6.7.9 Gateway
Scenario
A Hyperledger Fabric network channel can constantly change. The peer, orderer and CA components, contributed by
the different organizations in the network, will come and go. Reasons for this include increased or reduced business
demand, and both planned and unplanned outages. A gateway relieves an application of this burden, allowing it to
focus on the business problem it is trying to solve.
The MagnetoCorp and DigiBank applications (issue and buy) delegate their respective network interactions to their
gateways. Each gateway understands the network channel topology comprising the multiple peers and orderers of the two
organizations MagnetoCorp and DigiBank, leaving applications to focus on business logic. Peers can talk to each
other both within and across organizations using the gossip protocol.
A gateway can be used by an application in two different ways:
• Static: The gateway configuration is completely defined in a connection profile. All the peers, orderers and CAs
available to an application are statically defined in the connection profile used to configure the gateway. For
peers, this includes their role as an endorsing peer or event notification hub, for example. You can read more
about these roles in the connection profile topic.
The SDK will use this static topology, in conjunction with gateway connection options, to manage the transaction
submission and notification processes. The connection profile must contain enough of the network topology to
allow a gateway to interact with the network on behalf of the application; this includes the network channels,
organizations, orderers, peers and their roles.
• Dynamic: The gateway configuration is minimally defined in a connection profile. Typically, one or two peers
from the application’s organization are specified, and they use service discovery to discover the available net-
work topology. This includes peers, orderers, channels, instantiated smart contracts and their endorsement
policies. (In production environments, a gateway configuration should specify at least two peers for availabil-
ity.)
The SDK will use all of the static and discovered topology information, in conjunction with gateway connection
options, to manage the transaction submission and notification processes. As part of this, it will also intelligently
use the discovered topology; for example, it will calculate the minimum required endorsing peers using the
discovered endorsement policy for the smart contract.
You might ask yourself whether a static or dynamic gateway is better. The trade-off is between predictability and
responsiveness. Static networks will always behave the same way, as they perceive the network as unchanging. In
this sense they are predictable – they will always use the same peers and orderers if they are available. Dynamic
networks are more responsive as they understand how the network changes – they can use newly added peers and
orderers, which brings extra resilience and scalability, at potentially some cost in predictability. In general it’s fine to
use dynamic networks, and indeed this is the default mode for gateways.
Note that the same connection profile can be used statically or dynamically. Clearly, if a profile is going to be used
statically, it needs to be comprehensive, whereas dynamic usage requires only sparse population.
Both styles of gateway are transparent to the application; the application program design does not change whether
static or dynamic gateways are used. This also means that some applications may use service discovery, while others
may not. In general using dynamic discovery means less definition and more intelligence by the SDK; it is the default.
Connect
When an application connects to a gateway, two options are provided. These are used in subsequent SDK processing:
• Connection profile: connectionProfile is the gateway configuration that will be used for transaction
processing by the SDK, whether statically or dynamically. It can be specified in YAML or JSON, though it must
be converted to a JSON object when passed to the gateway:
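A minimal sketch of that conversion and the connect call, assuming the commercial paper sample’s YAML profile
and Isabella’s file system wallet (names as used elsewhere in this topic):

const fs = require('fs');
const yaml = require('js-yaml');
const { FileSystemWallet, Gateway } = require('fabric-network');

async function main() {
    // Load the YAML connection profile and convert it to a JSON object
    const connectionProfile = yaml.safeLoad(
        fs.readFileSync('../gateway/networkConnection.yaml', 'utf8'));

    // Connection options as described in the connection options topic
    const wallet = new FileSystemWallet('../identity/user/isabella/wallet');
    const connectionOptions = { identity: 'User1@org1.example.com', wallet: wallet };

    const gateway = new Gateway();
    await gateway.connect(connectionProfile, connectionOptions);
    // ... use the gateway, then release its resources:
    gateway.disconnect();
}

main();

The second input on connect is the connectionOptions object described earlier; the profile says which network
components can be reached, while the options say precisely how the gateway should use them.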
Static
Static gateways define a fixed view of a network. In the MagnetoCorp scenario, a gateway might identify a single peer
from MagnetoCorp, a single peer from DigiBank, and a MagnetoCorp orderer. Alternatively, a gateway might define
all peers and orderers from MagnetoCorp and DigiBank. In both cases, a gateway must define a view of the network
sufficient to get commercial paper transactions endorsed and distributed.
Applications can use a gateway statically by explicitly specifying the connect option discovery: {
enabled:false } on the gateway.connect() API. Alternatively, the environment variable setting
FABRIC_SDK_DISCOVERY=false will always override the application choice.
Examine the connection profile used by the MagnetoCorp issue application. See how all the peers, orderers and even
CAs are specified in this file, including their roles.
It’s worth bearing in mind that a static gateway represents a view of a network at a moment in time. As networks
change, it may be important to reflect this in a change to the gateway file. Applications will automatically pick up
these changes when they re-load the gateway file.
Dynamic
Dynamic gateways define a small, fixed starting point for a network. In the MagnetoCorp scenario, a dynamic gateway
might identify just a single peer from MagnetoCorp; everything else will be discovered! (To provide resiliency, it might
be better to define two such bootstrap peers.)
If service discovery is selected by an application, the topology defined in the gateway file is augmented with that
produced by this process. Service discovery starts with the gateway definition, and finds all the connected peers and
orderers within the MagnetoCorp organization using the gossip protocol. If anchor peers have been defined for a
channel, then service discovery will use the gossip protocol across organizations to discover components within the
connected organizations. This process will also discover smart contracts installed on peers and their endorsement
policies defined at a channel level. As with static gateways, the discovered network must be sufficient to get
commercial paper transactions endorsed and distributed.
Dynamic gateways are the default setting for Fabric applications. They can be explicitly specified using the connect
option discovery: { enabled:true } on the gateway.connect() API. Alternatively, the environment
variable setting FABRIC_SDK_DISCOVERY=true will always override the application choice.
A dynamic gateway represents an up-to-date view of a network. As networks change, service discovery will ensure that
the network view is an accurate reflection of the topology visible to the application. Applications will automatically
pick up these changes; they do not even need to re-load the gateway file.
Multiple gateways
Finally, it is straightforward for an application to define multiple gateways, both for the same or different networks.
Moreover, applications can use the name gateway both statically and dynamically.
It can be helpful to have multiple gateways. Here are a few reasons:
• Handling requests on behalf of different users.
• Connecting to different networks simultaneously.
• Testing a network configuration, by simultaneously comparing its behaviour with an existing configuration.
This topic covers how to develop a client application and smart contract to solve a business problem using Hyperledger
Fabric. In a real world Commercial Paper scenario, involving multiple organizations, you’ll learn about all the
concepts and tasks required to accomplish this goal. We assume that the blockchain network is already available.
The topic is designed for multiple audiences:
• Solution and application architect
• Client application developer
• Smart contract developer
• Business professional
You can choose to read the topic in order, or you can select individual sections as appropriate. Individual topic sections
are marked according to reader relevance, so whether you’re looking for business or technical information it’ll be clear
when a topic is for you.
The topic follows a typical software development lifecycle. It starts with business requirements, and then covers all
the major technical activities required to develop an application and smart contract to meet these requirements.
If you’d prefer, you can try out the commercial paper scenario immediately, following an abbreviated explanation, by
running the commercial paper tutorial. You can return to this topic when you need fuller explanations of the concepts
introduced in the tutorial.
Tutorials
We offer tutorials to get you started with Hyperledger Fabric. The first is oriented to the Hyperledger Fabric appli-
cation developer, Writing Your First Application. It takes you through the process of writing your first blockchain
application for Hyperledger Fabric using the Hyperledger Fabric Node SDK.
The second tutorial is oriented towards the Hyperledger Fabric network operators, Building Your First Network. This
one walks you through the process of establishing a blockchain network using Hyperledger Fabric and provides a basic
sample application to test it out.
There are also tutorials for updating your channel, Adding an Org to a Channel, and upgrading your network to a later
version of Hyperledger Fabric, Upgrading Your Network Components.
Finally, we offer two chaincode tutorials: one oriented to developers, Chaincode for Developers, and the other oriented
to operators, Chaincode for Operators.
Note: If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please
visit the Still Have Questions? page for some tips on where to find additional help.
Note: If you’re not yet familiar with the fundamental architecture of a Fabric network, you may want to visit the Key
Concepts section prior to continuing.
It is also worth noting that this tutorial serves as an introduction to Fabric applications and uses simple smart contracts
and applications. For a more in-depth look at Fabric applications and smart contracts, check out our Developing
Applications section or the Commercial paper tutorial.
In this tutorial we’ll be looking at a handful of sample programs to see how Fabric apps work. These applications
and the smart contracts they use are collectively known as FabCar. They provide a great starting point to understand
a Hyperledger Fabric blockchain. You’ll learn how to write an application and smart contract to query and update a
ledger, and how to use a Certificate Authority to generate the X.509 certificates used by applications which interact
with a permissioned blockchain.
We will use the application SDK (described in detail in the Application topic) to invoke a smart contract which
queries and updates the ledger using the smart contract SDK (described in detail in the Smart Contract Processing
section).
We’ll go through three principal steps:
1. Setting up a development environment. Our application needs a network to interact with, so we’ll
get a basic network our smart contracts and application will use.
2. Learning about a sample smart contract, FabCar. We use a smart contract written in JavaScript.
We’ll inspect the smart contract to learn about the transactions within it, and how they are used by
applications to query and update the ledger.
3. Develop a sample application which uses FabCar. Our application will use the FabCar smart contract
to query and update car assets on the ledger. We’ll get into the code of the apps and the transactions they
create, including querying a car, querying a range of cars, and creating a new car.
After completing this tutorial you should have a basic understanding of how an application is programmed in conjunc-
tion with a smart contract to interact with the ledger hosted and replicated on the peers in a Fabric network.
Note: These applications are also compatible with Service Discovery and Private data, though we won’t explicitly
show how to use our apps to leverage those features.
Note: This next section requires you to be in the first-network subdirectory within your local clone of the
fabric-samples repo.
If you’ve already run through Building Your First Network, you will have downloaded fabric-samples and have
a network up and running. Before you run this tutorial, you must stop this network:
./byfn.sh down
If you have run through this tutorial before, use the following commands to kill any stale or active containers. Note,
this will take down all of your containers whether they’re Fabric related or not.
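A typical cleanup pair looks like this (review before running; the fabcar image filter is an assumption based on the
sample’s image naming):

docker rm -f $(docker ps -aq)
docker rmi -f $(docker images | grep fabcar | awk '{print $3}')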
If you don’t have a development environment and the accompanying artifacts for the network and applications, visit
the Prerequisites page and ensure you have the necessary dependencies installed on your machine.
Next, if you haven’t done so already, visit the Install Samples, Binaries and Docker Images page and follow the pro-
vided instructions. Return to this tutorial once you have cloned the fabric-samples repository, and downloaded
the latest stable Fabric images and available utilities.
If you are using Mac OS and running Mojave, you will need to install Xcode.
Note: This next section requires you to be in the fabcar subdirectory within your local clone of the
fabric-samples repo.
Launch your network using the startFabric.sh shell script. This command will spin up a blockchain network
comprising peers, orderers, certificate authorities and more. It will also install and instantiate a javascript version of
the FabCar smart contract which will be used by our application to access the ledger. We’ll learn more about these
components as we go through the tutorial.
./startFabric.sh javascript
Alright, you’ve now got a sample network up and running, and the FabCar smart contract installed and instantiated.
Let’s install our application pre-requisites so that we can try it out, and see how everything works together.
Note: The following instructions require you to be in the fabcar/javascript subdirectory within your local
clone of the fabric-samples repo.
Run the following command to install the Fabric dependencies for the applications. It will take about a minute to
complete:
npm install
This process installs the key application dependencies defined in package.json. The most important of these
is the fabric-network module; it enables an application to use identities, wallets, and gateways to connect to chan-
nels, submit transactions, and wait for notifications. This tutorial also uses the fabric-ca-client module to enroll
users with their respective certificate authorities, generating a valid identity which is then used by fabric-network
methods.
Once npm install completes, everything is in place to run the application. For this tutorial, you’ll primarily be
using the application JavaScript files in the fabcar/javascript directory. Let’s take a look at what’s inside:
ls
There are files for other program languages, for example in the fabcar/typescript directory. You can read these
once you’ve used the JavaScript example – the principles are the same.
Note: The following two sections involve communication with the Certificate Authority. You may find it useful
to stream the CA logs when running the upcoming programs by opening a new terminal shell and running docker
logs -f ca.example.com.
When we created the network, an admin user — literally called admin — was created as the registrar for the
certificate authority (CA). Our first step is to generate the private key, public key, and X.509 certificate for admin
using the enrollAdmin.js program. This process uses a Certificate Signing Request (CSR) — the private and public
key are first generated locally and the public key is then sent to the CA which returns an encoded certificate for use by
the application. These three credentials are then stored in the wallet, allowing us to act as an administrator for the CA.
We will subsequently register and enroll a new application user which will be used by our application to interact with
the blockchain.
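A condensed sketch of what enrollAdmin.js does (the CA address and bootstrap secret shown are the basic network
defaults, and are assumptions here; check the sample for the exact code):

const FabricCAServices = require('fabric-ca-client');
const { FileSystemWallet, X509WalletMixin } = require('fabric-network');

async function main() {
    // Connect to the certificate authority (basic network default address)
    const ca = new FabricCAServices('http://localhost:7054');

    // Enroll admin: a key pair is generated locally and a CSR is sent to the CA
    const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'adminpw' });

    // Store the returned certificate and private key in the wallet
    const wallet = new FileSystemWallet('./wallet');
    const identity = X509WalletMixin.createIdentity('Org1MSP',
        enrollment.certificate, enrollment.key.toBytes());
    await wallet.import('admin', identity);
}

main();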
Let’s enroll user admin:
node enrollAdmin.js
This command has stored the CA administrator’s credentials in the wallet directory.
Now that we have the administrator’s credentials in a wallet, we can enroll a new user — user1 — which will be
used to query and update the ledger:
node registerUser.js
Similar to the admin enrollment, this program uses a CSR to enroll user1 and store its credentials alongside those of
admin in the wallet. We now have identities for two separate users — admin and user1 — and these are used by
our application.
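For reference, here is a condensed sketch of the register-and-enroll flow inside registerUser.js, assuming the
wallet and connection profile from the previous step (the affiliation shown is the sample’s default and is an
assumption here):

const fs = require('fs');
const path = require('path');
const { FileSystemWallet, Gateway, X509WalletMixin } = require('fabric-network');

async function main() {
    const ccpPath = path.resolve(__dirname, '..', '..', 'basic-network', 'connection.json');
    const ccp = JSON.parse(fs.readFileSync(ccpPath, 'utf8'));
    const wallet = new FileSystemWallet('./wallet');

    // Connect as admin, the CA registrar enrolled in the previous step
    const gateway = new Gateway();
    await gateway.connect(ccp, { wallet, identity: 'admin', discovery: { enabled: false } });

    // Register user1 with the CA, then enroll it using the returned secret
    const ca = gateway.getClient().getCertificateAuthority();
    const registrar = gateway.getCurrentIdentity();
    const secret = await ca.register(
        { affiliation: 'org1.department1', enrollmentID: 'user1', role: 'client' }, registrar);
    const enrollment = await ca.enroll({ enrollmentID: 'user1', enrollmentSecret: secret });

    // Store user1's credentials in the wallet for query.js and invoke.js
    const identity = X509WalletMixin.createIdentity(
        'Org1MSP', enrollment.certificate, enrollment.key.toBytes());
    await wallet.import('user1', identity);
}

main();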
Time to interact with the ledger...
Each peer in a blockchain network hosts a copy of the ledger, and an application program can query the ledger by
invoking a smart contract which queries the most recent value of the ledger and returns it to the application.
Here is a simplified representation of how a query works:
Applications read data from the ledger using a query. The most common queries involve the current values of data in
the ledger – its world state. The world state is represented as a set of key-value pairs, and applications can query data
for a single key or multiple keys. Moreover, the ledger world state can be configured to use a database like CouchDB
which supports complex queries when key-values are modeled as JSON data. This can be very helpful when looking
for all assets that match certain keywords with particular values; all cars with a particular owner, for example.
First, let’s run our query.js program to return a listing of all the cars on the ledger. This program uses our second
identity – user1 – to access the ledger:
node query.js
{"Key":"CAR1", "Record":{"colour":"red","make":"Ford","model":"Mustang","owner":"Brad
˓→"}},
{"Key":"CAR2", "Record":{"colour":"green","make":"Hyundai","model":"Tucson","owner":
˓→"Jin Soo"}},
{"Key":"CAR3", "Record":{"colour":"yellow","make":"Volkswagen","model":"Passat","owner
˓→":"Max"}},
{"Key":"CAR4", "Record":{"colour":"black","make":"Tesla","model":"S","owner":"Adriana
˓→"}},
{"Key":"CAR5", "Record":{"colour":"purple","make":"Peugeot","model":"205","owner":
˓→"Michel"}},
{"Key":"CAR6", "Record":{"colour":"white","make":"Chery","model":"S22L","owner":"Aarav
˓→"}},
{"Key":"CAR7", "Record":{"colour":"violet","make":"Fiat","model":"Punto","owner":"Pari
˓→"}},
{"Key":"CAR8", "Record":{"colour":"indigo","make":"Tata","model":"Nano","owner":
˓→"Valeria"}},
{"Key":"CAR9", "Record":{"colour":"brown","make":"Holden","model":"Barina","owner":
˓→"Shotaro"}}]
Let’s take a closer look at this program. Use an editor (e.g. Atom or Visual Studio Code) and open query.js.
The application starts by bringing into scope two key classes from the fabric-network module:
FileSystemWallet and Gateway. These classes will be used to locate the user1 identity in the wallet, and use
it to connect to the network:
const { FileSystemWallet, Gateway } = require('fabric-network');
This code creates a new gateway and then uses it to connect the application to the network. ccp describes the network
that the gateway will access with the identity user1 from wallet. See how the ccp has been loaded from ../..
/basic-network/connection.json and parsed as a JSON file:
const ccpPath = path.resolve(__dirname, '..', '..', 'basic-network', 'connection.json');
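The connect call that follows in query.js is along these lines, assuming the wallet populated by
registerUser.js:

const ccp = JSON.parse(fs.readFileSync(ccpPath, 'utf8'));
const wallet = new FileSystemWallet(path.join(process.cwd(), 'wallet'));
const gateway = new Gateway();
await gateway.connect(ccp, { wallet, identity: 'user1', discovery: { enabled: false } });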
If you’d like to understand more about the structure of a connection profile, and how it defines the network, check out
the connection profile topic.
A network can be divided into multiple channels, and the next important line of code connects the application to a
particular channel within the network, mychannel:
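const network = await gateway.getNetwork('mychannel');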
Within this channel, we can access the smart contract fabcar to interact with the ledger:
const contract = network.getContract('fabcar');
Within fabcar there are many different transactions, and our application initially uses the queryAllCars trans-
action to access the ledger world state data:
const result = await contract.evaluateTransaction('queryAllCars');
The evaluateTransaction method represents one of the simplest interactions with a smart contract in a blockchain
network. It simply picks a peer defined in the connection profile and sends the request to it, where it is evaluated. The
smart contract queries all the cars on the peer’s copy of the ledger and returns the result to the application. This
interaction does not result in an update to the ledger.
Let’s take a look at the transactions within the FabCar smart contract. Navigate to the chaincode/fabcar/
javascript/lib subdirectory at the root of fabric-samples and open fabcar.js in your editor.
See how our smart contract is defined using the Contract class:
class FabCar extends Contract {...
Within this class structure, you’ll see that we have the following transactions defined: initLedger, queryCar,
queryAllCars, createCar, and changeCarOwner. For example:
async queryCar(ctx, carNumber) {...}
async queryAllCars(ctx) {...}
Let’s take a closer look at the queryAllCars transaction to see how it interacts with the ledger.
async queryAllCars(ctx) {
The body of this transaction (sketched below) defines the range of cars that queryAllCars will retrieve from the
ledger. Every car between CAR0 and CAR999 – 1,000 cars in all, assuming every key has been tagged properly – will
be returned by the query. The remainder of the code iterates through the query results and packages them into JSON
for the application.
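A sketch of the full transaction body, following the pattern used by the sample (consult fabcar.js for the exact
code):

async queryAllCars(ctx) {
    const startKey = 'CAR0';
    const endKey = 'CAR999';

    // Iterate over every key in the [startKey, endKey) range of the world state
    const iterator = await ctx.stub.getStateByRange(startKey, endKey);
    const allResults = [];
    while (true) {
        const res = await iterator.next();
        if (res.value && res.value.value.toString()) {
            const Key = res.value.key;
            let Record;
            try {
                Record = JSON.parse(res.value.value.toString('utf8'));
            } catch (err) {
                Record = res.value.value.toString('utf8');
            }
            allResults.push({ Key, Record });
        }
        if (res.done) {
            // Release the iterator and return the packaged results as JSON
            await iterator.close();
            return JSON.stringify(allResults);
        }
    }
}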
Below is a representation of how an application would call different transactions in a smart contract. Each transaction
uses a broad set of APIs such as getStateByRange to interact with the ledger. You can read more about these
APIs in detail.
We can see our queryAllCars transaction, and another called createCar. We will use this later in the tutorial to
update the ledger, and add a new block to the blockchain.
But first, go back to the query program and change the evaluateTransaction request to query CAR4. The
query program should now look like this:
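Assuming queryCar takes the car key as its single argument:

const result = await contract.evaluateTransaction('queryCar', 'CAR4');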
Save the program and navigate back to your fabcar/javascript directory. Now run the query program again:
node query.js
If you go back and look at the result from when the transaction was queryAllCars, you can see that CAR4 was
Adriana’s black Tesla model S, which is the result that was returned here.
We can use the queryCar transaction to query against any car, using its key (e.g. CAR0) and get whatever make,
model, color, and owner correspond to that car.
Great. At this point you should be comfortable with the basic query transactions in the smart contract and the handful
of parameters in the query program.
Time to update the ledger...
Now that we’ve done a few ledger queries and added a bit of code, we’re ready to update the ledger. There are a lot of
potential updates we could make, but let’s start by creating a new car.
From an application perspective, updating the ledger is simple. An application submits a transaction to the blockchain
network, and when it has been validated and committed, the application receives a notification that the transaction
has been successful. Under the covers this involves the process of consensus whereby the different components of
the blockchain network work together to ensure that every proposed update to the ledger is valid and performed in an
agreed and consistent order.
Above, you can see the major components that make this process work. As well as the multiple peers which each
host a copy of the ledger, and optionally a copy of the smart contract, the network also contains an ordering service.
The ordering service coordinates transactions for a network; it creates blocks containing transactions in a well-defined
sequence originating from all the different applications connected to the network.
Our first update to the ledger will create a new car. We have a separate program called invoke.js that we will use
to make updates to the ledger. Just as with queries, use an editor to open the program and navigate to the code block
where we construct our transaction and submit it to the network:
See how the application calls the smart contract transaction createCar to create a black Honda Accord with an
owner named Tom. We use CAR12 as the identifying key here, just to show that we don’t need to use sequential keys.
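In the sample this is a single submitTransaction call; the argument order shown (key, make, model, colour,
owner) should be verified against the createCar transaction:

await contract.submitTransaction('createCar', 'CAR12', 'Honda', 'Accord', 'Black', 'Tom');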
Save it and run the program:
node invoke.js
˓→"9076cd4279a71ecf99665aed0ed3590a25bba040fa6b4dd6d010f42bb26ff5d1"
Notice how the invoke application interacted with the blockchain network using the submitTransaction API,
rather than evaluateTransaction.
submitTransaction is much more sophisticated than evaluateTransaction. Rather than interacting with
a single peer, the SDK will send the submitTransaction proposal to every required organization’s peer in the
blockchain network. Each of these peers will execute the requested smart contract using this proposal, to generate a
transaction response which it signs and returns to the SDK. The SDK collects all the signed transaction responses into
a single transaction, which it then sends to the orderer. The orderer collects and sequences transactions from every
application into a block of transactions. It then distributes these blocks to every peer in the network, where every
transaction is validated and committed. Finally, the SDK is notified, allowing it to return control to the application.
submitTransaction does all this for the application! The process by which the application, smart contract, peers
and ordering service work together to keep the ledger consistent across the network is called consensus, and it is
explained in detail in this section.
To see that this transaction has been written to the ledger, go back to query.js and change the argument from CAR4
to CAR12.
In other words, change the car key passed to evaluateTransaction, as sketched below, and run the query
program again:
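// Before:
const result = await contract.evaluateTransaction('queryCar', 'CAR4');
// After:
const result = await contract.evaluateTransaction('queryCar', 'CAR12');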
node query.js
Congratulations. You’ve created a car and verified that it’s recorded on the ledger!
So now that we’ve done that, let’s say that Tom is feeling generous and he wants to give his Honda Accord to someone
named Dave.
To do this, go back to invoke.js and change the smart contract transaction from createCar to
changeCarOwner with a corresponding change in input arguments:
The first argument — CAR12 — identifies the car that will be changing owners. The second argument — Dave —
defines the new owner of the car.
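The modified call looks like this:

await contract.submitTransaction('changeCarOwner', 'CAR12', 'Dave');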
Save and execute the program again:
node invoke.js
Now let’s query the ledger again and ensure that Dave is now associated with the CAR12 key:
node query.js
Note: In a real world application the smart contract would likely have some access control logic. For example, only
certain authorized users may create new cars, and only the car owner may transfer the car to somebody else.
7.1.7 Summary
Now that we’ve done a few queries and a few updates, you should have a pretty good sense of how applications interact
with a blockchain network using a smart contract to query or update the ledger. You’ve seen the basics of the roles
smart contracts, APIs, and the SDK play in queries and updates and you should have a feel for how different kinds of
applications could be used to perform other business tasks and operations.
As we said in the introduction, we have a whole section on Developing Applications that includes in-depth information
on smart contracts, process and data design, a more in-depth Commercial Paper tutorial, and a large amount of other
material relating to the development of applications.
In this tutorial two organizations, MagnetoCorp and DigiBank, trade commercial paper with each other using Paper-
Net, a Hyperledger Fabric blockchain network.
Once you’ve set up a basic network, you’ll act as Isabella, an employee of MagnetoCorp, who will issue a commercial
paper on its behalf. You’ll then switch hats to take the role of Balaji, an employee of DigiBank, who will buy this
commercial paper, hold it for a period of time, and then redeem it with MagnetoCorp for a small profit.
You’ll act as a developer, end user, and administrator, each in different organizations, performing the following steps
designed to help you understand what it’s like to collaborate as two different organizations working independently, but
according to mutually agreed rules in a Hyperledger Fabric network.
• Set up machine and download samples
• Create a network
• Understand the structure of a smart contract
• Work as an organization, MagnetoCorp, to install and instantiate smart contract
• Understand the structure of a MagnetoCorp application, including its dependencies
• Configure and use a wallet and identities
• Run a MagnetoCorp application to issue a commercial paper
• Understand how a second organization, Digibank, uses the smart contract in their applications
• As Digibank, run applications that buy and redeem commercial paper
This tutorial has been tested on MacOS and Ubuntu, and should work on other Linux distributions. A Windows version
is under development.
7.2.1 Prerequisites
Before you start, you must install some prerequisite technology required by the tutorial. We’ve kept these to a mini-
mum so that you can get going quickly.
You must have the following technologies installed:
• Node version 8.9.0, or higher. Node is a JavaScript runtime that you can use to run applications and smart
contracts. We recommend the LTS (Long Term Support) version of node. Install node here.
• Docker version 18.06, or higher. Docker helps developers and administrators create standard environments for
building and running applications and smart contracts. Hyperledger Fabric is provided as a set of Docker images,
and the PaperNet smart contract will run in a docker container. Install Docker here.
You will find it helpful to install the following technologies:
• A source code editor, such as Visual Studio Code version 1.28, or higher. VS Code will help you develop and
test your application and smart contract. Install VS Code here.
Many excellent code editors are available including Atom, Sublime Text and Brackets.
You may find it helpful to install the following technologies as you become more experienced with application and
smart contract development. There’s no requirement to install these when you first run the tutorial:
• Node Version Manager. NVM helps you easily switch between different versions of node – it can be really
helpful if you’re working on multiple projects at the same time. Install NVM here.
The commercial paper tutorial is one of the Hyperledger Fabric samples held in a public GitHub repository called
fabric-samples. As you’re going to run the tutorial on your machine, your first task is to download the
fabric-samples repository.
$ env
...
GOPATH=/Users/username/go
NVM_BIN=/Users/username/.nvm/versions/node/v8.11.2/bin
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
...
$ mkdir -p $GOPATH/src/github.com/hyperledger/
$ cd $GOPATH/src/github.com/hyperledger/
Use the git clone command to copy fabric-samples repository to this location:
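$ git clone https://github.com/hyperledger/fabric-samples.git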
$ cd fabric-samples
$ ls
Throughout the tutorial, a prefixed prompt such as
(isabella)$ ls
indicates that you should run the ls command from Isabella’s window.
The tutorial currently uses the basic network; it will be updated soon to a configuration which better reflects the multi-
organization structure of PaperNet. For now, this network is sufficient to show you how to develop an application and
smart contract.
The Hyperledger Fabric basic network comprises a peer and its ledger database, an orderer and a certificate authority
(CA). Each of these components runs as a docker container.
The peer, its ledger, the orderer and the CA each run in their own docker container. In production environments,
organizations typically use existing CAs that are shared with other systems; they’re not dedicated to the Fabric network.
You can manage the basic network using the commands and configuration included in the
fabric-samples/basic-network directory. Let’s start the network on your local machine with the
start.sh shell script:
$ cd fabric-samples/basic-network
$ ./start.sh
$ docker ps
See if you can map these containers to the basic-network (you may need to horizontally scroll to locate the informa-
tion):
• A peer peer0.org1.example.com is running in container ada3d078989b
• An orderer orderer.example.com is running in container 1fa1fd107bfb
• A CouchDB database couchdb is running in container 53fe614274f7
• A CA ca.example.com is running in container 469201085a20
These containers all form a docker network called net_basic. You can view the network with the docker
network command:
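$ docker network inspect net_basic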
{
"Name": "net_basic",
"Id": "62e9d37d00a0eda6c6301a76022c695f8e01258edaba6f65e876166164466ee5",
"Created": "2018-11-07T13:46:30.4992927Z",
"Containers": {
"1fa1fd107bfbe61522e4a26a57c2178d82b2918d5d423e7ee626c79b8a233624": {
"Name": "orderer.example.com",
        ...
}
See how the four containers use different IP addresses, while being part of a single docker network. (We’ve abbreviated
the output for clarity.)
To recap: you’ve downloaded the Hyperledger Fabric samples repository from GitHub and you’ve got the basic net-
work running on your local machine. Let’s now start to play the role of MagnetoCorp, who wish to trade commercial
paper.
To monitor the MagnetoCorp components of PaperNet, an administrator can view the aggregated output from a set of
docker containers using the logspout tool. It collects the different output streams into one place, making it easy
to see what’s happening from a single window. This can be really helpful for administrators when installing smart
contracts or for developers when invoking smart contracts, for example.
Let’s now monitor PaperNet as a MagnetoCorp administrator. Open a new window in the fabric-samples di-
rectory, and locate and run the monitordocker.sh script to start the logspout tool for the PaperNet docker
containers associated with the docker network net_basic:
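Assuming the script location used by the sample (verify the path in your clone):

(magnetocorp admin)$ ./organization/magnetocorp/configuration/cli/monitordocker.sh net_basic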
Note that you can pass a port number to the above command if the default port in monitordocker.sh is already
in use.
This window will now show output from the docker containers, so let’s start another terminal window which will allow
the MagnetoCorp administrator to interact with the network.
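This is done with docker-compose, using the CLI service definition shipped with the sample (treat the exact path as
an assumption):

(magnetocorp admin)$ docker-compose -f ./organization/magnetocorp/configuration/cli/docker-compose.yml up -d cliMagnetoCorp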
Again, see how the hyperledger/fabric-tools docker image was retrieved from Docker Hub and added to
the network:
The MagnetoCorp administrator will use the command line in container 562a88b25149 to interact with PaperNet.
Notice also the logspout container b7f3586e5d02; this is capturing the output of all other docker containers for
the monitordocker.sh command.
Let’s now use this command line to interact with PaperNet as the MagnetoCorp administrator.
issue, buy and redeem are the three functions at the heart of the PaperNet smart contract. The smart contract is used
by applications to submit transactions which correspondingly issue, buy and redeem commercial paper on the ledger.
Our next task is to examine this smart contract.
Open a new terminal window to represent a MagnetoCorp developer and change to the directory that contains Magne-
toCorp’s copy of the smart contract to view it with your chosen editor (VS Code in this tutorial):
(magnetocorp developer)$ cd commercial-paper/organization/magnetocorp/contract
(magnetocorp developer)$ code .
In the lib directory of the folder, you’ll see the papercontract.js file – this contains the commercial paper smart
contract!
An example code editor displaying the commercial paper smart contract in papercontract.js
papercontract.js is a JavaScript program designed to run in the node.js environment. Note the following key
program lines:
• const { Contract, Context } = require('fabric-contract-api');
This statement brings into scope two key Hyperledger Fabric classes that will be used extensively by the smart
contract – Contract and Context. You can learn more about these classes in the fabric-shim JSDOCS.
Before papercontract can be invoked by applications, it must be installed onto the appropriate peer nodes in
PaperNet. MagnetoCorp and DigiBank administrators are able to install papercontract onto peers over which
they respectively have authority.
The MagnetoCorp administrator uses the peer chaincode install command to copy the papercontract
smart contract from their local machine’s file system to the file system within the target peer’s docker container.
Once the smart contract is installed on the peer and instantiated on a channel, papercontract can be invoked by
applications, and interacts with the ledger database via the putState() and getState() Fabric APIs. Examine how these
APIs are used by the StateList class within ledger-api/statelist.js.
Let’s now install papercontract as the MagnetoCorp administrator. In the MagnetoCorp administrator’s com-
mand window, use the docker exec command to run the peer chaincode install command in the
cliMagnetoCorp container:
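The command is along these lines (the -p path and -l node language flag follow the sample; verify them against the
tutorial before copying):

(magnetocorp admin)$ docker exec cliMagnetoCorp peer chaincode install -n papercontract -v 0 -p /opt/gopath/src/github.com/contract -l node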
2018-11-07 14:21:48.466 UTC [chaincodeCmd] install -> INFO 003 Installed remotely response:<status:200 payload:"OK" >
The -p path in the install command refers to the contract’s location inside the cliMagnetoCorp container; the
container’s docker-compose configuration maps MagnetoCorp’s local organization directory into its file system, as
this extract shows:
volumes:
- ...
- ./../../../../organization/magnetocorp:/opt/gopath/src/github.com/
- ...
Now that papercontract chaincode containing the CommercialPaper smart contract is installed on the re-
quired PaperNet peers, an administrator can make it available to different network channels, so that it can be invoked
by applications connected to those channels. Because we’re using the basic network configuration for PaperNet, we’re
only going to make papercontract available in a single network channel, mychannel.
A MagnetoCorp administrator instantiates papercontract chaincode containing the smart contract. A new docker
chaincode container will be created to run papercontract.
The MagnetoCorp administrator uses the peer chaincode instantiate command to instantiate
papercontract on mychannel:
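The command is along these lines (the constructor arguments and endorsement policy follow the sample; verify
before copying):

(magnetocorp admin)$ docker exec cliMagnetoCorp peer chaincode instantiate -n papercontract -v 0 -l node -c '{"Args":["org.papernet.commercialpaper:instantiate"]}' -C mychannel -P "AND ('Org1MSP.member')"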
One of the most important parameters on instantiate is -P. It specifies the endorsement policy for
papercontract, describing the set of organizations that must endorse (execute and sign) a transaction before it
can be determined as valid. All transactions, whether valid or invalid, will be recorded on the ledger blockchain, but
only valid transactions will update the world state.
In passing, see how instantiate passes the orderer address orderer.example.com:7050. This is because
it additionally submits an instantiate transaction to the orderer, which will include the transaction in the next block
and distribute it to all peers that have joined mychannel, enabling any peer to execute the chaincode in their own
isolated chaincode container. Note that instantiate only needs to be issued once for papercontract even
though typically it is installed on many peers.
See how a papercontract container has been started with the docker ps command:
The smart contract contained in papercontract is called by MagnetoCorp’s application issue.js. Isabella uses
this application to submit a transaction to the ledger which issues commercial paper 00001. Let’s quickly examine
how the issue application works.
A gateway allows an application to focus on transaction generation, submission and response. It coordinates transac-
tion proposal, ordering and notification processing between the different network components.
Because the issue application submits transactions on behalf of Isabella, it starts by retrieving Isabella’s X.509
certificate from her wallet, which might be stored on the local file system or a Hardware Security Module HSM. The
issue application is then able to utilize the gateway to submit transactions on the channel. The Hyperledger Fabric
SDK provides a gateway abstraction so that applications can focus on application logic while delegating network
interaction to the gateway. Gateways and wallets make it straightforward to write Hyperledger Fabric applications.
So let’s examine the issue application that Isabella is going to use. Open a separate terminal window for her, and in
fabric-samples locate the MagnetoCorp /application folder:
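(isabella)$ cd commercial-paper/organization/magnetocorp/application/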
addToWallet.js is the program that Isabella is going to use to load her identity into her wallet, and issue.js
will use this identity to create commercial paper 00001 on behalf of MagnetoCorp by invoking papercontract.
Change to the directory that contains MagnetoCorp’s copy of the application issue.js, and use your code editor to
examine it:
Examine this directory; it contains the issue application and all its dependencies.
A code editor displaying the contents of the commercial paper application directory.
Note the following key program lines in issue.js:
• const { FileSystemWallet, Gateway } = require('fabric-network');
This statement brings two key Hyperledger Fabric SDK classes into scope – Wallet and Gateway. Because
Isabella’s X.509 certificate is in the local file system, the application uses FileSystemWallet.
• const wallet = new FileSystemWallet('../identity/user/isabella/wallet');
This statement identifies that the application will use Isabella’s wallet when it connects to the blockchain
network channel. The application will select a particular identity within the wallet. (The wallet must
have been loaded with Isabella’s X.509 certificate – that’s what addToWallet.js does.)
• await gateway.connect(connectionProfile, connectionOptions);
This line of code connects to the network using the gateway identified by connectionProfile, using the
identity referred to in connectionOptions.
See how ../gateway/networkConnection.yaml and User1@org1.example.com are used for
these values respectively.
• const network = await gateway.getNetwork('mychannel');
This connects the application to the network channel mychannel, where the papercontract was previ-
ously instantiated.
The issue.js application is written in JavaScript and designed to run in the node.js environment that acts as a client
to the PaperNet network. As is common practice, MagnetoCorp’s application is built on many external node packages
– to improve quality and speed of development. Consider how issue.js includes the js-yaml package to process
the YAML gateway connection profile, or the fabric-network package to access the Gateway and Wallet
classes:
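const yaml = require('js-yaml');
const { FileSystemWallet, Gateway } = require('fabric-network');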
These packages have to be downloaded from npm to the local file system using the npm install command. By
convention, packages must be installed into an application-relative /node_modules directory for use at runtime.
Examine the package.json file to see how issue.js identifies the packages to download and their exact ver-
sions:
"dependencies": {
"fabric-network": "^1.4.0-beta",
"fabric-client": "^1.4.0-beta",
"js-yaml": "^3.12.0"
},
npm versioning is very powerful; you can read more about it here.
Let’s install these packages with the npm install command – this may take up to a minute to complete:
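(magnetocorp user)$ npm install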
(magnetocorp user)$ ls
Examine the node_modules directory to see the packages that have been installed. There are lots, because
js-yaml and fabric-network are themselves built on other npm packages! Helpfully, the package-lock.json
file identifies the exact versions installed, which can prove invaluable if you want to exactly reproduce environments;
to test, diagnose problems or deliver proven applications, for example.
7.2.10 Wallet
Isabella is almost ready to run issue.js to issue MagnetoCorp commercial paper 00001; there’s just one remaining
task to perform! As issue.js acts on behalf of Isabella, and therefore MagnetoCorp, it will use an identity from her
wallet that reflects these facts. We now need to perform this one-time activity of adding appropriate X.509 credentials
to her wallet.
In Isabella’s terminal window, run the addToWallet.js program to add identity information to her wallet:
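(isabella)$ node addToWallet.js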
done
Isabella can store multiple identities in her wallet, though in our example, she only uses one –
User1@org1.example.com. This identity is currently associated with the basic network, rather than a more realistic
PaperNet configuration – we’ll update this tutorial soon.
addToWallet.js is a simple file-copying program which you can examine at your leisure. It moves an identity
from the basic network sample to Isabella’s wallet. Let’s focus on the result of this program – the contents of the wallet
which will be used to submit transactions to PaperNet:
(isabella)$ ls ../identity/user/isabella/wallet/
See how the directory structure maps the User1@org1.example.com identity – other identities used by Isabella
would have their own folder. Within this directory you’ll find the identity information that issue.js will use on
behalf of isabella:
(isabella)$ ls ../identity/user/isabella/wallet/User1@org1.example.com
Notice:
• a private key c75bd6911a...-priv used to sign transactions on Isabella’s behalf, but not distributed outside
of her immediate control.
• a public key c75bd6911a...-pub which is cryptographically linked to Isabella’s private key. This is wholly
contained within Isabella’s X.509 certificate.
• a certificate User1@org1.example.com which contains Isabella’s public key and other X.509 attributes
added by the Certificate Authority at certificate creation. This certificate is distributed to the network so that
different actors at different times can cryptographically verify information created by Isabella’s private key.
Learn more about certificates here. In practice, the certificate file also contains some Fabric-specific metadata
such as Isabella’s organization and role – read more in the wallet topic.
Isabella can now use issue.js to submit a transaction that will issue MagnetoCorp commercial paper 00001:
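(isabella)$ node issue.js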
The node command initializes a node.js environment, and runs issue.js. We can see from the program output
that MagnetoCorp commercial paper 00001 was issued with a face value of 5M USD.
As you’ve seen, to achieve this, the application invokes the issue transaction defined in the CommercialPaper
smart contract within papercontract.js. This had been installed and instantiated in the network by the Mag-
netoCorp administrator. It’s the smart contract which interacts with the ledger via the Fabric APIs, most notably
putState() and getState(), to represent the new commercial paper as a vector state within the world state.
We’ll see how this vector state is subsequently manipulated by the buy and redeem transactions also defined within
the smart contract.
All the time, the underlying Fabric SDK handles the transaction endorsement, ordering and notification process, mak-
ing the application’s logic straightforward; the SDK uses a gateway to abstract away network details and
connectionOptions to declare more advanced processing strategies such as transaction retry.
Let’s now follow the lifecycle of MagnetoCorp 00001 by switching our emphasis to DigiBank, who will buy the
commercial paper.
Now that commercial paper 00001 has been issued by MagnetoCorp, let’s switch context to interact with PaperNet
as employees of DigiBank. First, we’ll act as an administrator who will create a console configured to interact with
PaperNet. Then Balaji, an end user, will use DigiBank’s buy application to buy commercial paper 00001, moving it
to the next stage in its lifecycle.
DigiBank users will use applications which invoke the same smart contract as MagnetoCorp applications, though they
contain DigiBank-specific logic and configuration. It’s the smart contract which captures the shared business process,
and the ledger which holds the shared business data, no matter which applications call them.
Let’s open up a separate terminal to allow the DigiBank administrator to interact with PaperNet. In
fabric-samples:
(...)
Creating cliDigiBank ... done
This docker container is now available for Digibank administrators to interact with the network:
In this tutorial, you’ll use the command line container named cliDigiBank to interact with the network on behalf
of DigiBank. We’ve not shown all the docker containers, and in the real world DigiBank users would only see the
network components (peers, orderers, CAs) to which they have access.
Digibank’s administrator doesn’t have much to do in this tutorial right now because the PaperNet network configuration
is so simple. Let’s turn our attention to Balaji.
Balaji uses DigiBank’s buy application to submit a transaction to the ledger which transfers ownership of commercial
paper 00001 from MagnetoCorp to DigiBank. The CommercialPaper smart contract is the same as that used by
MagnetoCorp’s application, however the transaction is different this time – it’s buy rather than issue. Let’s examine
how DigiBank’s application works.
Open a separate terminal window for Balaji. In fabric-samples, change to the DigiBank application directory
that contains the application, buy.js, and open it with your editor:
(balaji)$ cd commercial-paper/organization/digibank/application/
(balaji)$ code buy.js
As you can see, this directory contains both the buy and redeem applications that will be used by Balaji.
DigiBank’s commercial paper directory containing the buy.js and redeem.js applications.
DigiBank’s buy.js application is very similar in structure to MagnetoCorp’s issue.js with two important differ-
ences:
• Identity: the user is a DigiBank user Balaji rather than MagnetoCorp’s Isabella
See how the application uses the balaji wallet when it connects to the PaperNet network channel. buy.js
selects a particular identity within balaji wallet.
• Transaction: the invoked transaction is buy rather than issue
A buy transaction is submitted with the values MagnetoCorp, 00001, ..., that are used by the
CommercialPaper smart contract class to transfer ownership of commercial paper 00001 to DigiBank.
Feel free to examine other files in the application directory to understand how the application works, and read in
detail how buy.js is implemented in the application topic.
The DigiBank applications which buy and redeem commercial paper have a very similar structure to MagnetoCorp’s
issue application. Therefore, let’s install their dependencies and set up Balaji’s wallet so that he can use these applica-
tions to buy and redeem commercial paper.
Like MagnetoCorp, DigiBank must install the required application packages using the npm install command,
and again, this may take a short time to complete.
In the DigiBank administrator window, install the application dependencies:
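(digibank admin)$ cd commercial-paper/organization/digibank/application/
(digibank admin)$ npm install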
In Balaji’s terminal window, run the addToWallet.js program to add identity information to his wallet:
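(balaji)$ node addToWallet.js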
done
The addToWallet.js program has added identity information for balaji, to his wallet, which will be used by
buy.js and redeem.js to submit transactions to PaperNet.
Like Isabella, Balaji can store multiple identities in his wallet, though in our example, he only uses one –
Admin@org1.example.com. His corresponding wallet structure digibank/identity/user/balaji/
wallet/Admin@org1.example.com is very similar to Isabella’s – feel free to examine it.
Balaji can now use buy.js to submit a transaction that will transfer ownership of MagnetoCorp commercial paper
00001 to DigiBank.
Run the buy application in Balaji’s window:
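(balaji)$ node buy.js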
You can see from the program output that MagnetoCorp commercial paper 00001 was successfully purchased by Balaji on
behalf of DigiBank. buy.js invoked the buy transaction defined in the CommercialPaper smart contract which
updated commercial paper 00001 within the world state using the putState() and getState() Fabric APIs.
As you’ve seen, the application logic to buy and issue commercial paper is very similar, as is the smart contract logic.
The final transaction in the lifecycle of commercial paper 00001 is for DigiBank to redeem it with MagnetoCorp.
Balaji uses redeem.js to submit a transaction to perform the redeem logic within the smart contract.
Run the redeem transaction in Balaji’s window:
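(balaji)$ node redeem.js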
Again, see how the commercial paper 00001 was successfully redeemed when redeem.js invoked the redeem
transaction defined in CommercialPaper. Again, it updated commercial paper 00001 within the world state to
reflect that the ownership returned to MagnetoCorp, the issuer of the paper.
To understand how applications and smart contracts shown in this tutorial work in more detail, you’ll find it helpful
to read Developing Applications. This topic will give you a fuller explanation of the commercial paper scenario, the
PaperNet business network, its actors, and how the applications and smart contracts they use work in detail.
Also feel free to use this sample to start creating your own applications and smart contracts!
Note: These instructions have been verified to work against the latest stable Docker images and the pre-compiled
setup utilities within the supplied tar file. If you run these commands with images or tools from the current master
branch, it is possible that you will see configuration and panic errors.
The build your first network (BYFN) scenario provisions a sample Hyperledger Fabric network consisting of two
organizations, each maintaining two peer nodes, and a “solo” ordering service.
Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed
on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.
You will also need to Install Samples, Binaries and Docker Images. You will notice that there are a number of samples
included in the fabric-samples repository. We will be using the first-network sample. Let’s open that
sub-directory now.
cd fabric-samples/first-network
Note: The supplied commands in this documentation MUST be run from your first-network sub-directory of
the fabric-samples repository clone. If you elect to run the commands from a different location, the various
provided scripts will be unable to find the binaries.
We provide a fully annotated script - byfn.sh - that leverages these Docker images to quickly bootstrap a Hyper-
ledger Fabric network comprised of 4 peers representing two different organizations, and an orderer node. It will also
launch a container to run a scripted execution that will join peers to a channel, deploy and instantiate chaincode and
drive execution of transactions against the deployed chaincode.
Here’s the help text for the byfn.sh script:
Usage:
  byfn.sh <mode> [-c <channel name>] [-t <timeout>] [-d <delay>] [-f <docker-compose-file>] [-s <dbtype>] [-l <language>] [-i <imagetag>] [-v]
If you choose not to supply a channel name, then the script will use a default name of mychannel. The CLI timeout
parameter (specified with the -t flag) is an optional value; if you choose not to set it, then the CLI will give up on query
requests made after the default setting of 10 seconds.
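To begin, generate the network artifacts:
./byfn.sh generate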
You will see a brief description as to what will occur, along with a yes/no command line prompt. Respond with a y or
hit the return key to execute the described action.
Generating certs and genesis block for with channel 'mychannel' and CLI timeout of '10'
Continue? [Y/n] y
proceeding ...
/Users/xxx/dev/fabric-samples/bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
org1.example.com
2017-06-12 21:01:37.334 EDT [bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
...
/Users/xxx/dev/fabric-samples/bin/configtxgen
##########################################################
######### Generating Orderer Genesis block ##############
##########################################################
2017-06-12 21:01:37.558 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.562 EDT [msp] getMspConfig -> INFO 002 intermediate certs folder not found at [/Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/
...
2017-06-12 21:01:37.588 EDT [common/configtx/tool] doOutputBlock -> INFO 00b Generating genesis block
#################################################################
### Generating channel configuration transaction 'channel.tx' ###
#################################################################
2017-06-12 21:01:37.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
#################################################################
####### Generating anchor peer update for Org1MSP ##########
#################################################################
2017-06-12 21:01:37.674 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
#################################################################
####### Generating anchor peer update for Org2MSP ##########
#################################################################
2017-06-12 21:01:37.700 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
This first step generates all of the certificates and keys for our various network entities, the genesis block used
to bootstrap the ordering service, and a collection of configuration transactions required to configure a Channel.
Next, you can bring the network up with one of the following commands:
./byfn.sh up
The above command will compile Golang chaincode images and spin up the corresponding containers. Go is the
default chaincode language, however there is also support for Node.js and Java chaincode. If you’d like to run through
this tutorial with node chaincode, pass the following command instead:
./byfn.sh up -l node
Note: For more information on the Node.js shim, please refer to its documentation.
Note: For more information on the Java shim, please refer to its documentation.
To make the sample run with Java chaincode, you have to specify -l java as follows:
./byfn.sh up -l java
Note: Do not run both of these commands. Only one language can be tried unless you bring down and recreate the
network between attempts.
Once again, you will be prompted as to whether you wish to continue or abort. Respond with a y or hit the return key:
The logs will continue from there. This will launch all of the containers, and then drive a complete end-to-end
application scenario. Upon successful completion, it should report the following in your terminal window:
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query successful on peer1.org2 on channel 'mychannel' =====================
_____ _ _ ____
| ____| | \ | | | _ \
| _| | \| | | | | |
| |___ | |\ | | |_| |
|_____| |_| \_| |____/
You can scroll through these logs to see the various transactions. If you don’t get this result, then jump down to the
Troubleshooting section and let’s see whether we can help you discover what went wrong.
Finally, let’s bring it all down so we can explore the network setup one step at a time. The following will kill your
containers, remove the crypto material and four artifacts, and delete the chaincode images from your Docker Registry:
./byfn.sh down
Once again, you will be prompted to continue, respond with a y or hit the return key:
If you’d like to learn more about the underlying tooling and bootstrap mechanics, continue reading. In these next
sections we’ll walk through the various steps and requirements to build a fully-functional Hyperledger Fabric network.
Note: The manual steps outlined below assume that the FABRIC_LOGGING_SPEC in the cli container is set to
DEBUG. You can set this by modifying the docker-compose-cli.yaml file in the first-network directory.
e.g.
cli:
container_name: cli
image: hyperledger/fabric-tools:$IMAGE_TAG
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- FABRIC_LOGGING_SPEC=DEBUG
#- FABRIC_LOGGING_SPEC=INFO
We will use the cryptogen tool to generate the cryptographic material (x509 certs and signing keys) for our various
network entities. These certificates are representative of identities, and they allow for sign/verify authentication to take
place as our entities communicate and transact.
Cryptogen consumes a file - crypto-config.yaml - that contains the network topology and allows us to generate
a set of certificates and keys for both the Organizations and the components that belong to those Organizations. Each
Organization is provisioned a unique root certificate (ca-cert) that binds specific components (peers and orderers)
to that Org. By assigning each Organization a unique CA certificate, we are mimicking a typical network where a
participating Member would use its own Certificate Authority. Transactions and communications within Hyperledger
Fabric are signed by an entity’s private key (keystore), and then verified by means of a public key (signcerts).
You will notice a count variable within this file. We use this to specify the number of peers per Organization; in our
case there are two peers per Org. We won’t delve into the minutiae of x.509 certificates and public key infrastructure
right now. If you’re interested, you can peruse these topics on your own time.
Before running the tool, let’s take a quick look at a snippet from the crypto-config.yaml. Pay specific attention
to the “Name”, “Domain” and “Specs” parameters under the OrdererOrgs header:
OrdererOrgs:
#---------------------------------------------------------
# Orderer
# --------------------------------------------------------
- Name: Orderer
Domain: example.com
CA:
Country: US
Province: California
Locality: San Francisco
# OrganizationalUnit: Hyperledger Fabric
# StreetAddress: address for org # default nil
# PostalCode: postalCode for org # default nil
The naming convention for a network entity is as follows - “{{.Hostname}}.{{.Domain}}”. So using our ordering
node as a reference point, we are left with an ordering node named - orderer.example.com that is tied to an
MSP ID of Orderer. This file contains extensive documentation on the definitions and syntax. You can also refer to
the Membership Service Providers (MSP) documentation for a deeper dive on MSP.
After we run the cryptogen tool, the generated certificates and keys will be saved to a folder titled
crypto-config.
Configtxgen consumes a file - configtx.yaml - that contains the definitions for the sample network. There are
three members - one Orderer Org (OrdererOrg) and two Peer Orgs (Org1 & Org2) each managing and maintaining
two peer nodes. This file also specifies a consortium - SampleConsortium - consisting of our two Peer Orgs. Pay
specific attention to the “Profiles” section at the top of this file. You will notice that we have two unique headers. One
for the orderer genesis block - TwoOrgsOrdererGenesis - and one for our channel - TwoOrgsChannel.
These headers are important, as we will pass them in as arguments when we create our artifacts.
Note: Notice that our SampleConsortium is defined in the system-level profile and then referenced by our
channel-level profile. Channels exist within the purview of a consortium, and all consortia must be defined in the
scope of the network at large.
This file also contains two additional specifications that are worth noting. Firstly, we specify the anchor peers for each
Peer Org (peer0.org1.example.com & peer0.org2.example.com). Secondly, we point to the location
of the MSP directory for each member, in turn allowing us to store the root certificates for each Org in the orderer
genesis block. This is a critical concept. Now any network entity communicating with the ordering service can have
its digital signature verified.
You can manually generate the certificates/keys and the various configuration artifacts using the configtxgen and
cryptogen commands. Alternately, you could try to adapt the byfn.sh script to accomplish your objectives.
You can refer to the generateCerts function in the byfn.sh script for the commands necessary to generate the
certificates that will be used for your network configuration as defined in the crypto-config.yaml file. However,
for the sake of convenience, we will also provide a reference here.
First let’s run the cryptogen tool. Our binary is in the bin directory, so we need to provide the relative path to
where the tool resides.
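../bin/cryptogen generate --config=./crypto-config.yaml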
org1.example.com
org2.example.com
The certs and keys (i.e. the MSP material) will be output into a directory - crypto-config - at the root of the
first-network directory.
Next, we need to tell the configtxgen tool where to look for the configtx.yaml file that it needs to ingest. We
will tell it to look in our present working directory:
export FABRIC_CFG_PATH=$PWD
Then, we’ll invoke the configtxgen tool to create the orderer genesis block:
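../bin/configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block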
Note: The orderer genesis block and the subsequent artifacts we are about to create will be output into the
channel-artifacts directory at the root of this project. The channelID in the above command is the name
of the system channel.
Next, we need to create the channel transaction artifact. Be sure to replace $CHANNEL_NAME or set CHANNEL_NAME
as an environment variable that can be used throughout these instructions:
# The channel.tx artifact contains the definitions for our sample channel
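export CHANNEL_NAME=mychannel && ../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME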
Next, we will define the anchor peer for Org1 on the channel that we are constructing. Again, be sure to replace
$CHANNEL_NAME or set the environment variable for the following commands. The terminal output will mimic that
of the channel transaction artifact:
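../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP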
Now, we will define the anchor peer for Org2 on the same channel:
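../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP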
Note: If you ran the byfn.sh example above previously, be sure that you have brought down the test network before
you proceed (see Bring Down the Network).
We will leverage a script to spin up our network. The docker-compose file references the images that we have previ-
ously downloaded, and bootstraps the orderer with our previously generated genesis.block.
We want to go through the commands manually in order to expose the syntax and functionality of each call.
First let’s start our network:
docker-compose -f docker-compose-cli.yaml up -d
If you want to see the realtime logs for your network, then do not supply the -d flag. If you let the logs stream, then
you will need to open a second terminal to execute the CLI calls.
Environment variables
For the following CLI commands against peer0.org1.example.com to work, we need to preface our commands
with the four environment variables given below. These variables for peer0.org1.example.com are baked into
the CLI container, therefore we can operate without passing them. HOWEVER, if you want to send calls to other
peers or the orderer, then you can provide these values accordingly by editing the docker-compose-base.yaml
before starting the container. Modify the following four environment variables to use a different peer and org.
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID="Org1MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Recall that we created the channel configuration transaction using the configtxgen tool in the Create a Channel
Configuration Transaction section, above. You can repeat that process to create additional channel configuration
transactions, using the same or different profiles in the configtx.yaml that you pass to the configtxgen tool.
Then you can repeat the process defined in this section to establish those other channels in your network.
We will enter the CLI container using the docker exec command:
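docker exec -it cli bash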
root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer#
If you do not want to run the CLI commands against the default peer peer0.org1.example.com, replace the
values of peer0 or org1 in the four environment variables and run the commands:
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Next, we are going to pass in the generated channel configuration transaction artifact that we created in the Create a
Channel Configuration Transaction section (we called it channel.tx) to the orderer as part of the create channel
request.
We specify our channel name with the -c flag and our channel configuration transaction with the -f flag. In this case
it is channel.tx, however you can mount your own configuration transaction with a different name. Once again we
will set the CHANNEL_NAME environment variable within our CLI container so that we don’t have to explicitly pass
this argument. Channel names must be all lower case, less than 250 characters long and match the regular expression
[a-z][a-z0-9.-]*.
export CHANNEL_NAME=mychannel
# the channel.tx file is mounted in the channel-artifacts directory within your CLI container
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Note: Notice the --cafile that we pass as part of this command. It is the local path to the orderer’s root cert,
allowing us to verify the TLS handshake.
This command returns a genesis block - <channel-ID.block> - which we will use to join the channel. It contains
the configuration information specified in channel.tx. If you have not made any modifications to the default channel
name, then the command will return you a proto titled mychannel.block.
Note: You will remain in the CLI container for the remainder of these manual commands. You must also remember to
preface all commands with the corresponding environment variables when targeting a peer other than peer0.org1.
example.com.
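Now join peer0.org1.example.com to the channel:
peer channel join -b mychannel.block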
You can make other peers join the channel as necessary by making appropriate changes in the four environment
variables we used in the Environment variables section, above.
Rather than join every peer, we will simply join peer0.org2.example.com so that we can properly update the
anchor peer definitions in our channel. Since we are overriding the default environment variables baked into the CLI
container, this full command will be the following:
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:7051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel join -b mychannel.block
Alternatively, you could choose to set these environment variables individually rather than passing in the entire string.
Once they’ve been set, you simply need to issue the peer channel join command again and the CLI container
will act on behalf of peer0.org2.example.com.
The following commands are channel updates and they will propagate to the definition of the channel. In essence, we
are adding additional configuration information on top of the channel’s genesis block. Note that we are not modifying
the genesis block, but simply adding deltas onto the chain that will define the anchor peers.
Update the channel definition to define the anchor peer for Org1 as peer0.org1.example.com:
peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Now update the channel definition to define the anchor peer for Org2 as peer0.org2.example.com. Identically
to the peer channel join command for the Org2 peer, we will need to preface this call with the appropriate
environment variables.
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer0.org2.example.com:7051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/Org2MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Note: We will utilize a simple existing chaincode. To learn how to write your own chaincode, see the Chaincode for
Developers tutorial.
Applications interact with the blockchain ledger through chaincode. As such we need to install the chaincode on
every peer that will execute and endorse our transactions, and then instantiate the chaincode on the channel.
First, install the sample Go, Node.js or Java chaincode onto the peer0 node in Org1. These commands place the
specified source code flavor onto our peer’s filesystem.
Note: You can only install one version of the source code per chaincode name and version. The source code exists on
the peer’s file system in the context of chaincode name and version; it is language agnostic. Similarly the instantiated
chaincode container will be reflective of whichever language has been installed on the peer.
Golang
# this installs the Go chaincode. For go chaincode -p takes the relative path from $GOPATH/src
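peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/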
Node.js
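peer chaincode install -n mycc -v 1.0 -l node -p /opt/gopath/src/github.com/chaincode/chaincode_example02/node/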
Java
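peer chaincode install -n mycc -v 1.0 -l java -p /opt/gopath/src/github.com/chaincode/chaincode_example02/java/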
When we instantiate the chaincode on the channel, the endorsement policy will be set to require endorsements from a
peer in both Org1 and Org2. Therefore, we also need to install the chaincode on a peer in Org2.
Modify the following four environment variables to issue the install command against peer0 in Org2:
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
CORE_PEER_ADDRESS=peer0.org2.example.com:7051
CORE_PEER_LOCALMSPID="Org2MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
Now install the sample Go, Node.js or Java chaincode onto peer0 in Org2. These commands place the specified
source code flavor onto our peer’s filesystem.
Golang
# this installs the Go chaincode. For go chaincode -p takes the relative path from $GOPATH/src
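peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/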
Node.js
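peer chaincode install -n mycc -v 1.0 -l node -p /opt/gopath/src/github.com/chaincode/chaincode_example02/node/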
Java
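peer chaincode install -n mycc -v 1.0 -l java -p /opt/gopath/src/github.com/chaincode/chaincode_example02/java/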
Next, instantiate the chaincode on the channel. This will initialize the chaincode on the channel, set the endorsement
policy for the chaincode, and launch a chaincode container for the targeted peer. Take note of the -P argument. This is
our policy where we specify the required level of endorsement for a transaction against this chaincode to be validated.
In the command below you’ll notice that we specify our policy as -P "AND ('Org1MSP.peer','Org2MSP.
peer')". This means that we need “endorsement” from a peer belonging to Org1 AND Org2 (i.e., two endorsements).
If we changed the syntax to OR then we would need only one endorsement.
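For example, a policy of -P "OR ('Org1MSP.peer','Org2MSP.peer')" would consider a transaction valid with an endorsement from a peer in either organization alone.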
Golang
# be sure to replace the $CHANNEL_NAME environment variable if you have not exported it
# if you did not install your chaincode with a name of mycc, then modify that argument as well
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
Node.js
Note: The instantiation of the Node.js chaincode will take roughly a minute. The command is not hanging; rather it
is installing the fabric-shim layer as the image is being compiled.
# be sure to replace the $CHANNEL_NAME environment variable if you have not exported it
# if you did not install your chaincode with a name of mycc, then modify that argument as well
# notice that we must pass the -l flag after the chaincode name to identify the language
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -l node -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
Java
Note: Please note, Java chaincode instantiation might take some time as it compiles the chaincode and downloads a
docker container with the Java environment.
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -l java -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
See the endorsement policies documentation for more details on policy implementation.
If you want additional peers to interact with the ledger, then you will need to join them to the channel, and install the same
name, version and language of the chaincode source onto the appropriate peer’s filesystem. A chaincode container
will be launched for each peer as soon as they try to interact with that specific chaincode. Again, be cognizant of the
fact that the Node.js images will be slower to compile.
Once the chaincode has been instantiated on the channel, we can forgo the -l flag. We need only pass in the channel
identifier and name of the chaincode.
Query
Let’s query for the value of a to make sure the chaincode was properly instantiated and the state DB was populated.
The syntax for query is as follows:
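peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'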
Invoke
Now let’s move 10 from a to b. This transaction will cut a new block and update the state DB. The syntax for invoke
is as follows:
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}'
Query
Let’s confirm that our previous invocation executed properly. We initialized the key a with a value of 100 and just
removed 10 with our previous invocation. Therefore, a query against a should return 90. The syntax for query is as
follows.
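peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'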
Query Result: 90
Feel free to start over and manipulate the key value pairs and subsequent invocations.
Install
Now we will install the chaincode on a third peer, peer1 in Org2. Modify the following four environment variables to
issue the install command against peer1 in Org2:
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
CORE_PEER_ADDRESS=peer1.org2.example.com:7051
CORE_PEER_LOCALMSPID="Org2MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
Now install the sample Go, Node.js or Java chaincode onto peer1 in Org2. These commands place the specified source
code flavor onto our peer’s filesystem.
Golang
# this installs the Go chaincode. For go chaincode -p takes the relative path from $GOPATH/src
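peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/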
Node.js
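peer chaincode install -n mycc -v 1.0 -l node -p /opt/gopath/src/github.com/chaincode/chaincode_example02/node/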
Java
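peer chaincode install -n mycc -v 1.0 -l java -p /opt/gopath/src/github.com/chaincode/chaincode_example02/java/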
Query
Let’s confirm that we can issue the query to Peer1 in Org2. We initialized the key a with a value of 100 and just
removed 10 with our previous invocation. Therefore, a query against a should still return 90.
peer1 in Org2 must first join the channel before it can respond to queries. The channel can be joined by issuing the
following command:
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp CORE_PEER_ADDRESS=peer1.org2.example.com:7051 CORE_PEER_LOCALMSPID="Org2MSP" CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt peer channel join -b mychannel.block
After the join command returns, the query can be issued. The syntax for query is as follows.
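peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'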
Query Result: 90
Feel free to start over and manipulate the key value pairs and subsequent invocations.
Note: These steps describe the scenario in which script.sh is run by ./byfn.sh up. Clean your network with
./byfn.sh down and ensure that command has completed. Then use the same docker-compose prompt to launch your
network again.
• A script - script.sh - is baked inside the CLI container. The script drives the createChannel command
against the supplied channel name and uses the channel.tx file for channel configuration.
• The output of createChannel is a genesis block - <your_channel_name>.block - which gets stored
on the peers’ file systems and contains the channel configuration specified from channel.tx.
• The joinChannel command is exercised for all four peers, which takes as input the previously generated
genesis block. This command instructs the peers to join <your_channel_name> and create a chain starting
with <your_channel_name>.block.
• Now we have a channel consisting of four peers, and two organizations. This is our TwoOrgsChannel profile.
• peer0.org1.example.com and peer1.org1.example.com belong to Org1; peer0.org2.
example.com and peer1.org2.example.com belong to Org2
• These relationships are defined through the crypto-config.yaml and the MSP path is specified in our
docker compose.
• The anchor peers for Org1MSP (peer0.org1.example.com) and Org2MSP (peer0.org2.example.
com) are then updated. We do this by passing the Org1MSPanchors.tx and Org2MSPanchors.tx
artifacts to the ordering service along with the name of our channel.
• A chaincode - chaincode_example02 - is installed on peer0.org1.example.com and peer0.org2.
example.com
• The chaincode is then “instantiated” on mychannel. Instantiation adds the chaincode to the channel, starts
the container for the target peer, and initializes the key value pairs associated with the chaincode. The initial
values for this example are ["a","100","b","200"]. This “instantiation” results in a container by the name of
dev-peer0.org2.example.com-mycc-1.0 starting.
• The instantiation also passes in an argument for the endorsement policy. The policy is defined as -P "AND
('Org1MSP.peer','Org2MSP.peer')", meaning that any transaction must be endorsed by a peer tied
to Org1 and Org2.
• A query against the value of “a” is issued to peer0.org2.example.com. A container for Org2 peer0 by the
name of dev-peer0.org2.example.com-mycc-1.0 was started when the chaincode was instantiated.
The result of the query is returned. No write operations have occurred, so a query against “a” will still return a
value of “100”.
• An invoke is sent to peer0.org1.example.com and peer0.org2.example.com to move “10” from
“a” to “b”
• A query is sent to peer0.org2.example.com for the value of “a”. A value of 90 is returned, correctly
reflecting the previous transaction during which the value for key “a” was modified by 10.
• The chaincode - chaincode_example02 - is installed on peer1.org2.example.com
• A query is sent to peer1.org2.example.com for the value of “a”. This starts a third chaincode con-
tainer by the name of dev-peer1.org2.example.com-mycc-1.0. A value of 90 is returned, correctly
reflecting the previous transaction during which the value for key “a” was modified by 10.
Chaincode MUST be installed on a peer in order for it to successfully perform read/write operations against the ledger.
Furthermore, a chaincode container is not started for a peer until an init or traditional transaction - read/write - is
performed against that chaincode (e.g. query for the value of “a”). The transaction causes the container to start. Also,
all peers in a channel maintain an exact copy of the ledger, which comprises the blockchain to store the immutable,
sequenced record in blocks, as well as a state database to maintain a snapshot of the current state. This includes
those peers that do not have chaincode installed on them (like peer1.org1.example.com in the above example).
Finally, the chaincode is accessible after it is installed (like peer1.org2.example.com in the above example)
because it has already been instantiated.
2017-05-16 17:08:01.366 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-05-16 17:08:01.366 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-16 17:08:01.366 UTC [msp/identity] Sign -> DEBU 006 Sign: plaintext: 0AB1070A6708031A0C08F1E3ECC80510...6D7963631A0A0A0571756572790A0161
2017-05-16 17:08:01.367 UTC [msp/identity] Sign -> DEBU 007 Sign: digest: E61DB37F4E8B0D32C9FE10E3936BA9B8CD278FAA1F3320B08712164248285C54
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query successful on peer1.org2 on channel 'mychannel' =====================
_____ _ _ ____
| ____| | \ | | | _ \
| _| | \| | | | | |
| |___ | |\ | | |_| |
|_____| |_| \_| |____/
You can scroll through these logs to see the various transactions.
Inspect the individual chaincode containers to see the separate transactions executed against each container. Here is
the combined output from each container:
The BYFN sample offers us two flavors of Docker Compose files, both of which are extended from the
docker-compose-base.yaml (located in the base folder). Our first flavor, docker-compose-cli.yaml,
provides us with a CLI container, along with an orderer and four peers. We use this file for the entirety of the instructions
on this page.
Note: the remainder of this section covers a docker-compose file designed for the SDK. Refer to the Node SDK repo
for details on running these tests.
The second flavor, docker-compose-e2e.yaml, is constructed to run end-to-end tests using the Node.js SDK.
Aside from functioning with the SDK, its primary differentiation is that there are containers for the fabric-ca servers.
As a result, we are able to send REST calls to the organizational CAs for user registration and enrollment.
If you want to use the docker-compose-e2e.yaml without first running the byfn.sh script, then you will need
to make four slight modifications. We need to point to the private keys for our Organizations’ CAs. You can locate
these values in your crypto-config folder. For example, to locate the private key for Org1 we would follow this path
- crypto-config/peerOrganizations/org1.example.com/ca/. The private key is a long hash value
followed by _sk. The path for Org2 would be - crypto-config/peerOrganizations/org2.example.
com/ca/.
In the docker-compose-e2e.yaml update the FABRIC_CA_SERVER_TLS_KEYFILE variable for ca0 and
ca1. You also need to edit the path that is provided in the command to start the ca server. You are providing the same
private key twice for each CA container.
The state database can be switched from the default (goleveldb) to CouchDB. The same chaincode functions are
available with CouchDB, however, there is the added ability to perform rich and complex queries against the state
database data content contingent upon the chaincode data being modeled as JSON.
To use CouchDB instead of the default database (goleveldb), follow the same procedures outlined earlier for generating
the artifacts, except when starting the network pass docker-compose-couch.yaml as well:
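docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d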
Note: If you choose to implement mapping of the fabric-couchdb container port to a host port, please make sure you
are aware of the security implications. Mapping of the port in a development environment makes the CouchDB REST
API available, and allows the visualization of the database via the CouchDB web interface (Fauxton). Production
environments would likely refrain from implementing port mapping in order to restrict outside access to the CouchDB
containers.
You can use the chaincode_example02 chaincode against the CouchDB state database using the steps outlined above;
however, in order to exercise the CouchDB query capabilities you will need to use a chaincode that has data modeled
as JSON (e.g. marbles02). You can locate the marbles02 chaincode in the fabric/examples/chaincode/go
directory.
We will follow the same process to create and join the channel as outlined in the Create & Join Channel section above.
Once you have joined your peer(s) to the channel, use the following steps to interact with the marbles02 chaincode:
• Install and instantiate the chaincode on peer0.org1.example.com:
# be sure to modify the $CHANNEL_NAME variable accordingly for the instantiate command
peer chaincode install -n marbles -v 1.0 -p github.com/chaincode/marbles02/go/
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -v 1.0 -c '{"Args":["init"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
• Create some marbles and move them around:
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble2","red","50","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble3","blue","70","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarble","marble2","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["delete","marble1"]}'
• If you chose to map the CouchDB ports in docker-compose, you can now view the state database through the
CouchDB web interface (Fauxton) by opening a browser and navigating to the following URL:
http://localhost:5984/_utils
You should see a database named mychannel (or your unique channel name) and the documents inside it.
Note: For the below commands, be sure to update the $CHANNEL_NAME variable appropriately.
You can run regular queries from the CLI (e.g. reading marble2):
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["readMarble","marble2"]}'
You can also retrieve the history of a specific marble (e.g. marble1, via getHistoryForMarble); the output shows each transaction that touched the key, ending with the entry for its deletion:
[{"TxId":"...", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}},{"TxId":"755d55c281889eaeebf405586f9e25d71d36eb3d35420af833a20a2f53a3eefd", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"jerry"}},{"TxId":"819451032d813dde6247f85e56a89262555e04f14788ee33e28b232eef36d98f", "Value":}]
You can also perform rich queries on the data content, such as querying marble fields by owner jerry:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesByOwner","jerry"]}'
The output lists the marbles now owned by jerry, ending with:
...{"docType":"marble","name":"marble3","owner":"jerry","size":70}}]
CouchDB is a kind of NoSQL solution. It is a document-oriented database where document fields are stored as
key-value maps. Fields can be either a simple key-value pair, a list, or a map. In addition to the keyed/composite-key/key-
range queries supported by LevelDB, CouchDB also supports full data rich queries, such as non-key queries
against the whole blockchain data, since its data content is stored in JSON format and is fully queryable.
Therefore, CouchDB can meet the chaincode, auditing and reporting requirements of many use cases that are not
supported by LevelDB.
CouchDB can also enhance security for compliance and data protection in the blockchain, as it is able to implement
field-level security through the filtering and masking of individual attributes within a transaction, and can authorize
read-only permission if needed.
In addition, CouchDB falls into the AP type (Availability and Partition Tolerance) of the CAP theorem. It uses a
master-master replication model with eventual consistency; more information can be found on the Eventual
Consistency page of the CouchDB documentation. However, each Fabric peer runs its own CouchDB instance with no
database replicas, so writes to the database are guaranteed consistent and durable (not eventually consistent).
CouchDB is the first external pluggable state database for Fabric, and there could and should be other external database
options. For example, IBM enables a relational database for its blockchain. CP-type (Consistency and
Partition Tolerance) databases may also be needed, so as to enable data consistency without an application-level guarantee.
If data persistence is desired on the peer container or the CouchDB container, one option is to mount a directory in the
docker-host into a relevant directory in the container. For example, you may add the following two lines in the peer
container specification in the docker-compose-base.yaml file:
volumes:
- /var/hyperledger/peer0:/var/hyperledger/production
For the CouchDB container, you may add the following two lines in the CouchDB container specification:
volumes:
- /var/hyperledger/couchdb0:/opt/couchdb/data
7.3.11 Troubleshooting
• Always start your network fresh. Use the following command to remove artifacts, crypto, containers and chain-
code images:
./byfn.sh down
Note: You will see errors if you do not remove old containers and images.
• If you see Docker errors, first check your docker version (Prerequisites), and then try restarting your Docker
process. Problems with Docker are oftentimes not immediately recognizable. For example, you may see errors
resulting from an inability to access crypto material mounted within a container.
If they persist remove your images and start from scratch:
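docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -q)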
• If you see errors on your create, instantiate, invoke or query commands, make sure you have properly updated
the channel name and chaincode name. There are placeholder values in the supplied sample commands.
• If you see the below error:
Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits)
Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Error: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Make sure you are running your network against the “1.0.0” images that have been retagged as “latest”.
• If you see the below error:
Then you did not set the FABRIC_CFG_PATH environment variable properly. The configtxgen tool needs this
variable in order to locate the configtx.yaml. Go back and execute an export FABRIC_CFG_PATH=$PWD,
then recreate your channel artifacts.
• To cleanup the network, use the down option:
./byfn.sh down
• If you see an error stating that you still have “active endpoints”, then prune your Docker networks. This will
wipe your previous networks and start you with a fresh environment:
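docker network prune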
WARNING! This will remove all networks not used by at least one container.
Are you sure you want to continue? [y/N]
Select y.
• If you see an error similar to the following:
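/bin/bash: ./scripts/script.sh: /bin/bash^M: bad interpreter: No such file or directory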
Ensure that the file in question (script.sh in this example) is encoded in the Unix format. This was most likely
caused by not setting core.autocrlf to false in your Git configuration (see Windows extras). There are
several ways of fixing this. If you have access to the vim editor for instance, open the file:
vim ./fabric-samples/first-network/scripts/script.sh
Then change the file format and save by executing the following from within vim:
:set ff=unix
Note: If you continue to see errors, share your logs on the fabric-questions channel on Hyperledger Rocket Chat or
on StackOverflow.
Note: Ensure that you have downloaded the appropriate images and binaries as outlined in Install Samples, Binaries
and Docker Images and Prerequisites that conform to the version of this documentation (which can be found at the
bottom of the table of contents to the left). In particular, your version of the fabric-samples folder must include
the eyfn.sh (“Extending Your First Network”) script and its related scripts.
This tutorial serves as an extension to the Building Your First Network (BYFN) tutorial, and will demonstrate the
addition of a new organization – Org3 – to the application channel (mychannel) autogenerated by BYFN. It assumes
a strong understanding of BYFN, including the usage and functionality of the aforementioned utilities.
While we will focus solely on the integration of a new organization here, the same approach can be adopted when
performing other channel configuration updates (updating modification policies or altering batch size, for example).
To learn more about the process and possibilities of channel config updates in general, check out Updating a Channel
Configuration. It’s also worth noting that channel configuration updates like the one demonstrated here will usually
be the responsibility of an organization admin (rather than a chaincode or application developer).
Note: Make sure the automated byfn.sh script runs without error on your machine before continuing. If you have
exported your binaries and the related tools (cryptogen, configtxgen, etc) into your PATH variable, you’ll be
able to modify the commands accordingly without passing the fully qualified path.
We will be operating from the root of the first-network subdirectory within your local clone of
fabric-samples. Change into that directory now. You will also want to open a few extra terminals for ease
of use.
First, use the byfn.sh script to tidy up. This command will kill any active or stale docker containers and remove
previously generated artifacts. It is by no means necessary to bring down a Fabric network in order to perform
channel configuration update tasks. However, for the sake of this tutorial, we want to operate from a known initial
state. Therefore let’s run the following command to clean up any previous environments:
./byfn.sh down
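Then generate the default BYFN artifacts: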
./byfn.sh generate
And launch the network making use of the scripted execution within the CLI container:
./byfn.sh up
Now that you have a clean version of BYFN running on your machine, you have two different paths you can pursue.
First, we offer a fully commented script that will carry out a config transaction update to bring Org3 into the network.
Also, we will show a “manual” version of the same process, showing each step and explaining what it accomplishes
(since we show you how to bring down your network before this manual process, you could also run the script and
then look at each step).
You should be in first-network. To use the script, simply issue the following:
./eyfn.sh up
The output here is well worth reading. You’ll see the Org3 crypto material being added, the config update being
created and signed, and then chaincode being installed to allow Org3 to execute ledger queries.
If everything goes well, you’ll get this message:
eyfn.sh can be used with the same Node.js chaincode and database options as byfn.sh by issuing the following
(instead of ./byfn.sh up):
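./byfn.sh up -c testchannel -s couchdb -l node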
And then:
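./eyfn.sh up -c testchannel -s couchdb -l node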
For those who want to take a closer look at this process, the rest of the doc will show you each command for making
a channel update and what it does.
Note: The manual steps outlined below assume that the FABRIC_LOGGING_SPEC in the cli and Org3cli
containers is set to DEBUG.
For the cli container, you can set this by modifying the docker-compose-cli.yaml file in the
first-network directory. e.g.
cli:
container_name: cli
image: hyperledger/fabric-tools:$IMAGE_TAG
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
#- FABRIC_LOGGING_SPEC=INFO
- FABRIC_LOGGING_SPEC=DEBUG
For the Org3cli container, you can set this by modifying the docker-compose-org3.yaml file in the
first-network directory. e.g.
Org3cli:
container_name: Org3cli
image: hyperledger/fabric-tools:$IMAGE_TAG
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
#- FABRIC_LOGGING_SPEC=INFO
- FABRIC_LOGGING_SPEC=DEBUG
If you’ve used the eyfn.sh script, you’ll need to bring your network down. This can be done by issuing:
./eyfn.sh down
This will bring down the network, delete all the containers and undo what we’ve done to add Org3.
When the network is down, bring it back up again.
./byfn.sh generate
Then:
./byfn.sh up
This will bring your network back to the same state it was in before you executed the eyfn.sh script.
Now we’re ready to add Org3 manually. As a first step, we’ll need to generate Org3’s crypto material.
cd org3-artifacts
There are two yaml files of interest here: org3-crypto.yaml and configtx.yaml. First, generate the crypto
material for Org3:
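../../bin/cryptogen generate --config=./org3-crypto.yaml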
This command reads in our new crypto yaml file – org3-crypto.yaml – and leverages cryptogen to generate
the keys and certificates for an Org3 CA as well as two peers bound to this new Org. As with the BYFN implementa-
tion, this crypto material is put into a newly generated crypto-config folder within the present working directory
(in our case, org3-artifacts).
Now use the configtxgen utility to print out the Org3-specific configuration material in JSON. We will preface the
command by telling the tool to look in the current directory for the configtx.yaml file that it needs to ingest.
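export FABRIC_CFG_PATH=$PWD && ../../bin/configtxgen -printOrg Org3MSP > ../channel-artifacts/org3.json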
The above command creates a JSON file – org3.json – and outputs it into the channel-artifacts subdirec-
tory at the root of first-network. This file contains the policy definitions for Org3, as well as three important
certificates presented in base 64 format: the admin user certificate (which will be needed to act as the admin of Org3
later on), a CA root cert, and a TLS root cert. In an upcoming step we will append this JSON file to the channel
configuration.
Our final piece of housekeeping is to port the Orderer Org’s MSP material into the Org3 crypto-config directory.
In particular, we are concerned with the Orderer’s TLS root cert, which will allow for secure communication between
Org3 entities and the network’s ordering node.
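cd ../ && cp -r crypto-config/ordererOrganizations org3-artifacts/crypto-config/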
The update process makes use of the configuration translator tool – configtxlator. This tool provides a stateless
REST API independent of the SDK. Additionally, it provides a CLI to simplify configuration tasks in Fabric networks.
The tool allows for the easy conversion between different equivalent data representations/formats (in this case, between
protobufs and JSON). Additionally, the tool can compute a configuration update transaction based on the differences
between two channel configurations.
First, exec into the CLI container. Recall that this container has been mounted with the BYFN crypto-config
library, giving us access to the MSP material for the two original peer organizations and the Orderer Org. The boot-
strapped identity is the Org1 admin user, meaning that any steps where we want to act as Org2 will require the export
of MSP-specific environment variables.
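docker exec -it cli bash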
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem && export CHANNEL_NAME=mychannel
Note: If for any reason you need to restart the CLI container, you will also need to re-export the two environment
variables – ORDERER_CA and CHANNEL_NAME.
Now we have a CLI container with our two key environment variables – ORDERER_CA and CHANNEL_NAME ex-
ported. Let’s go fetch the most recent config block for the channel – mychannel.
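peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA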
The reason why we have to pull the latest version of the config is because channel config elements are versioned.
Versioning is important for several reasons. It prevents config changes from being repeated or replayed (for instance,
reverting to a channel config with old CRLs would represent a security risk). Also it helps ensure concurrency (if you
want to remove an Org from your channel, for example, after a new Org has been added, versioning will help prevent
you from removing both Orgs, instead of just the Org you want to remove).
This command saves the binary protobuf channel configuration block to config_block.pb. Note that the choice
of name and file extension is arbitrary. However, following a convention which identifies both the type of object being
represented and its encoding (protobuf or JSON) is recommended.
When you issued the peer channel fetch command, there was a decent amount of output in the terminal. The
last line in the logs is of interest:
2017-11-07 17:17:57.383 UTC [channelCmd] readBlock -> DEBU 011 Received block: 2
This is telling us that the most recent configuration block for mychannel is actually block 2, NOT the genesis block.
By default, the peer channel fetch config command returns the most recent configuration block for the
targeted channel, which in this case is the third block. This is because the BYFN script defined anchor peers for our
two organizations – Org1 and Org2 – in two separate channel update transactions.
As a result, we have the following configuration sequence:
• block 0: genesis block
• block 1: Org1 anchor peer update
• block 2: Org2 anchor peer update
Now we will make use of the configtxlator tool to decode this channel configuration block into JSON format
(which can be read and modified by humans). We also must strip away all of the headers, metadata, creator signatures,
and so on that are irrelevant to the change we want to make. We accomplish this by means of the jq tool:
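configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json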
This leaves us with a trimmed down JSON object – config.json, located in the fabric-samples folder inside
first-network – which will serve as the baseline for our config update.
Take a moment to open this file inside your text editor of choice (or in your browser). Even after you’re done with this
tutorial, it will be worth studying it as it reveals the underlying configuration structure and the other kind of channel
updates that can be made. We discuss them in more detail in Updating a Channel Configuration.
Note: The steps you’ve taken up to this point will be nearly identical no matter what kind of config update you’re try-
ing to make. We’ve chosen to add an org with this tutorial because it’s one of the most complex channel configuration
updates you can attempt.
We’ll use the jq tool once more to append the Org3 configuration definition – org3.json – to the channel’s appli-
cation groups field, and name the output – modified_config.json.
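The merge looks like this (the ./channel-artifacts/org3.json path assumes the location where the Org3 definition was generated earlier in the tutorial):

jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"Org3MSP":.[1]}}}}}' config.json ./channel-artifacts/org3.json > modified_config.json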
Now, within the CLI container we have two JSON files of interest – config.json and modified_config.json. The initial file contains only Org1 and Org2 material, whereas the “modified” file contains all three Orgs. At this point it’s simply a matter of re-encoding these two JSON files and calculating the delta.
First, translate config.json back into a protobuf called config.pb:
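configtxlator proto_encode --input config.json --type common.Config --output config.pb

Do the same for the modified config:

configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb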
Now use configtxlator to calculate the delta between these two config protobufs. This command will output a
new protobuf binary named org3_update.pb:
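configtxlator compute_update --channel_id $CHANNEL_NAME --original config.pb --updated modified_config.pb --output org3_update.pb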
This new proto – org3_update.pb – contains the Org3 definitions and high level pointers to the Org1 and Org2
material. We are able to forgo the extensive MSP material and modification policy information for Org1 and Org2
because this data is already present within the channel’s genesis block. As such, we only need the delta between the
two configurations.
Before submitting the channel update, we need to perform a few final steps. First, let’s decode this object into editable
JSON format and call it org3_update.json:
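configtxlator proto_decode --input org3_update.pb --type common.ConfigUpdate | jq . > org3_update.json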
Now, we have a decoded update file – org3_update.json – that we need to wrap in an envelope mes-
sage. This step will give us back the header field that we stripped away earlier. We’ll name this file
org3_update_in_envelope.json:
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":{"config_update":'$(cat org3_update.json)'}}}' | jq . > org3_update_in_envelope.json
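Lastly, convert the envelope into the full protobuf that Fabric requires:

configtxlator proto_encode --input org3_update_in_envelope.json --type common.Envelope --output org3_update_in_envelope.pb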
Almost done!
We now have a protobuf binary – org3_update_in_envelope.pb – within our CLI container. However, we
need signatures from the requisite Admin users before the config can be written to the ledger. The modification policy
(mod_policy) for our channel Application group is set to the default of “MAJORITY”, which means that we need a
majority of existing org admins to sign it. Because we have only two orgs – Org1 and Org2 – and the majority of
two is two, we need both of them to sign. Without both signatures, the ordering service will reject the transaction for
failing to fulfill the policy.
First, let’s sign this update proto as the Org1 Admin. Remember that the CLI container is bootstrapped with the Org1
MSP material, so we simply need to issue the peer channel signconfigtx command:
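peer channel signconfigtx -f org3_update_in_envelope.pb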
The final step is to switch the CLI container’s identity to reflect the Org2 Admin user. We do this by exporting four
environment variables specific to the Org2 MSP.
Note: Switching between organizations to sign a config transaction (or to do anything else) is not reflective of a real-
world Fabric operation. A single container would never be mounted with an entire network’s crypto material. Rather,
the config update would need to be securely passed out-of-band to an Org2 Admin for inspection and approval.
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
Lastly, we will issue the peer channel update command. The Org2 Admin signature will be attached to this
call so there is no need to manually sign the protobuf a second time:
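peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA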
Note: The upcoming update call to the ordering service will undergo a series of systematic signature and policy
checks. As such you may find it useful to stream and inspect the ordering node’s logs. From another shell, issue a
docker logs -f orderer.example.com command to display them.
You should see a message digest indication similar to the following if your update has been submitted successfully:
2018-02-24 18:56:33.499 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 3207B24E40DE2FAB87A2E42BC004FEAA1E6FDCA42977CB78C64F05A88E556ABA
2018-02-24 18:56:33.499 UTC [channelCmd] update -> INFO 010 Successfully submitted channel update
The successful channel update call returns a new block – block 5 – to all of the peers on the channel. If you remember,
blocks 0-2 are the initial channel configurations while blocks 3 and 4 are the instantiation and invocation of the mycc
chaincode. As such, block 5 serves as the most recent channel configuration with Org3 now defined on the channel.
Inspect the logs for peer0.org1.example.com:
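docker logs peer0.org1.example.com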
Follow the demonstrated process to fetch and decode the new config block if you wish to inspect its contents.
Note: This section is included as a general reference for understanding the leader election settings when adding
organizations to a network after the initial channel configuration has completed. This sample defaults to dynamic
leader election, which is set for all peers in the network in peer-base.yaml.
Newly joining peers are bootstrapped with the genesis block, which does not contain information about the organi-
zation that is being added in the channel configuration update. Therefore new peers are not able to utilize gossip
as they cannot verify blocks forwarded by other peers from their own organization until they get the configuration
transaction which added the organization to the channel. Newly added peers must therefore have one of the following
configurations so that they receive blocks from the ordering service:
1. To utilize static leader mode, configure the peer to be an organization leader:
CORE_PEER_GOSSIP_USELEADERELECTION=false
CORE_PEER_GOSSIP_ORGLEADER=true
Note: This configuration must be the same for all new peers added to the channel.
2. To utilize dynamic leader election, configure the peer to use leader election:
CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false
Note: Because peers of the newly added organization won’t be able to form a membership view, this option will be similar to the static configuration, as each peer will start proclaiming itself to be a leader. However, once they get
updated with the configuration transaction that adds the organization to the channel, there will be only one active leader
for the organization. Therefore, it is recommended to leverage this option if you eventually want the organization’s
peers to utilize leader election.
At this point, the channel configuration has been updated to include our new organization – Org3 – meaning that
peers attached to it can now join mychannel.
First, let’s launch the containers for the Org3 peers and an Org3-specific CLI.
Open a new terminal and from first-network kick off the Org3 docker compose:
docker-compose -f docker-compose-org3.yaml up -d
This new compose file has been configured to bridge across our initial network, so the two peers and the CLI container
will be able to resolve with the existing peers and ordering node. With the three new containers now running, exec
into the Org3-specific CLI container:
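docker exec -it Org3cli bash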
Just as we did with the initial CLI container, export the two key environment variables: ORDERER_CA and
CHANNEL_NAME:
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem && export CHANNEL_NAME=mychannel
Now let’s send a call to the ordering service asking for the genesis block of mychannel. The ordering service is able
to verify the Org3 signature attached to this call as a result of our successful channel update. If Org3 has not been
successfully appended to the channel config, the ordering service should reject this request.
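peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA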
Note: Again, you may find it useful to stream the ordering node’s logs to reveal the sign/verify logic and policy
checks.
Notice that we are passing a 0 to indicate that we want the first block on the channel’s ledger (i.e. the genesis block).
If we simply passed the peer channel fetch config command, then we would have received block 5 – the
updated config with Org3 defined. However, we can’t begin our ledger with a downstream block – we must start with
block 0.
Issue the peer channel join command and pass in the genesis block – mychannel.block:
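peer channel join -b mychannel.block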
If you want to join the second peer for Org3, export the TLS and ADDRESS variables and reissue the peer channel
join command:
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer1.org3.example.com/tls/ca.crt && export CORE_PEER_ADDRESS=peer1.org3.example.com:7051
The final piece of the puzzle is to increment the chaincode version and update the endorsement policy to include Org3.
Since we know that an upgrade is coming, we can forgo the futile exercise of installing version 1 of the chaincode.
We are solely concerned with the new version where Org3 will be part of the endorsement policy, therefore we’ll jump
directly to version 2 of the chaincode.
From the Org3 CLI:
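The install command below uses the BYFN chaincode path, with -v 2.0 reflecting the version bump described above:

peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/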
Modify the environment variables accordingly and reissue the command if you want to install the chaincode on the
second peer of Org3. Note that a second installation is not mandated, as you only need to install chaincode on peers
that are going to serve as endorsers or otherwise interface with the ledger (i.e. query only). Peers will still run the
validation logic and serve as committers without a running chaincode container.
Now jump back to the original CLI container and install the new version on the Org1 and Org2 peers. We submitted
the channel update call with the Org2 admin identity, so the container is still acting on behalf of peer0.org2:
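peer chaincode install -n mycc -v 2.0 -p github.com/chaincode/chaincode_example02/go/

Then flip to the Org1 environment and, after exporting the variables below, reissue the same install command: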
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
Now we’re ready to upgrade the chaincode. There have been no modifications to the underlying source code; we are simply adding Org3 to the endorsement policy for a chaincode – mycc – on mychannel.
Note: Any identity satisfying the chaincode’s instantiation policy can issue the upgrade call. By default, these
identities are the channel Admins.
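The upgrade command looks like this (the init values a=90 and b=210 follow the BYFN sample; adjust if your channel state differs):

peer chaincode upgrade -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc -v 2.0 -c '{"Args":["init","a","90","b","210"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')"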
You can see in the above command that we are specifying our new version by means of the -v flag. You can also see that the endorsement policy has been modified to -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')", reflecting the addition of Org3 to the policy. The final area of interest is our constructor request (specified with the -c flag).
As with an instantiate call, a chaincode upgrade requires use of the init method. If your chaincode requires arguments to be passed to the init method, you will need to supply them here.
The upgrade call adds a new block – block 6 – to the channel’s ledger and allows for the Org3 peers to execute
transactions during the endorsement phase. Hop back to the Org3 CLI container and issue a query for the value of a.
This will take a bit of time because a chaincode image needs to be built for the targeted peer, and the container needs
to start:
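The query looks like this:

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'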
We should see a response of Query Result: 80, accurately reflecting the update of this chaincode’s world state.
7.4.13 Conclusion
The channel configuration update process is indeed quite involved, but there is a logical method to the various steps.
The endgame is to form a delta transaction object represented in protobuf binary format and then acquire the requisite
number of admin signatures such that the channel configuration update transaction fulfills the channel’s modification
policy.
The configtxlator and jq tools, along with the ever-growing peer channel commands, provide us with the
functionality to accomplish this task.
Note: When we use the term “upgrade” in this documentation, we’re primarily referring to changing the version of a
component (for example, going from a v1.3 binary to a v1.4 binary). The term “update,” on the other hand, refers not
to versions but to configuration changes, such as updating a channel configuration or a deployment script. As there is
no data migration, technically speaking, in Fabric, we will not use the term “migration” or “migrate” here.
Note: Also, if your network is not yet at Fabric v1.3, follow the instructions for Upgrading Your Network to v1.3.
The instructions in this documentation only cover moving from v1.3 to v1.4, not from any other version to v1.4.
7.5.1 Overview
Because the Building Your First Network (BYFN) tutorial defaults to the “latest” binaries, if you have run it since the
release of v1.4, your machine will have v1.4 binaries and tools installed on it and you will not be able to upgrade them.
As a result, this tutorial will provide a network based on Hyperledger Fabric v1.3 binaries as well as the v1.4 binaries
you will be upgrading to.
At a high level, our upgrade tutorial will perform the following steps:
1. Backup the ledger and MSPs.
2. Upgrade the orderer binaries to Fabric v1.4.
3. Upgrade the peer binaries to Fabric v1.4.
Note: There are no new Capability Requirements in v1.4. As a result, we do not have to update any channel
configurations as part of an upgrade to v1.4.
This tutorial will demonstrate how to perform each of these steps individually with CLI commands. We will also
describe how the CLI tools image can be updated.
Note: Because BYFN uses a “SOLO” ordering service (one orderer), our script brings down the entire network.
However, in production environments, the orderers and peers can be upgraded simultaneously and on a rolling basis.
In other words, you can upgrade the binaries in any order without bringing down the network.
Because BYFN is not compatible with the following components, our script for upgrading BYFN will not cover them:
• Fabric CA
• Kafka
• CouchDB
• SDK
The process for upgrading these components — if necessary — will be covered in a section following the tutorial. We
will also show how to upgrade the Node chaincode shim.
From an operational perspective, it’s worth noting that the process for gathering logs has changed in v1.4,
from CORE_LOGGING_LEVEL (for the peer) and ORDERER_GENERAL_LOGLEVEL (for the orderer) to
FABRIC_LOGGING_SPEC (the new operations service). For more information, check out the Fabric release notes.
Prerequisites
If you haven’t already done so, ensure you have all of the dependencies on your machine as described in Prerequisites.
Before we can upgrade to v1.4, we must first provision a network running Fabric v1.3 images.
Just as in the BYFN tutorial, we will be operating from the first-network subdirectory within your local clone
of fabric-samples. Change into that directory now. You will also want to open a few extra terminals for ease of
use.
Clean up
We want to operate from a known state, so we will use the byfn.sh script to kill any active or stale docker containers
and remove any previously generated artifacts. Run:
./byfn.sh down
With a clean environment, launch our v1.3 BYFN network using these four commands:
./byfn.sh generate
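Assuming the v1.3.0 samples are checked out first (for example, git fetch origin followed by git checkout v1.3.0 – the exact tag name is an assumption here), the network itself is then started with an extended timeout and the v1.3.0 image tag:

./byfn.sh up -t 3000 -i 1.3.0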
Note: If you have locally built v1.3 images, they will be used by the example. If you get errors, please consider
cleaning up your locally built v1.3 images and running the example again. This will download v1.3 images from
docker hub.
Note: The instructions below pertain to whatever is the most recently published version of v1.4.x. Please substitute
1.4.x with the version identifier of the published release that you are testing. In other words, replace ‘1.4.x’ with
‘1.4.0’ if you are testing the first release.
Before completing the rest of the tutorial, it’s important to get the v1.4.x version of the samples. You can do this by issuing:
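A sketch, assuming the samples publish a matching git tag (substitute the actual release tag for v1.4.x, per the note above):

git fetch origin
git checkout v1.4.x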
We have a script that will upgrade all of the components in BYFN as well as enable any capabilities (note, no new
capabilities are required for v1.4). If you are running a production network, or are an administrator of some part of a
network, this script can serve as a template for performing your own upgrades.
Afterwards, we will walk you through the steps in the script and describe what each piece of code is doing in the
upgrade process.
If you want to upgrade the network manually, simply run ./byfn.sh down again and perform the steps up to —
but not including — ./byfn.sh upgrade -i 1.4.x. Then proceed to the next section.
Note: Many of the commands you’ll run in this section will not result in any output. In general, assume no output is
good output.
Orderer containers should be upgraded in a rolling fashion (one at a time). At a high level, the orderer upgrade process
goes as follows:
1. Stop the orderer.
2. Back up the orderer’s ledger and MSP.
3. Restart the orderer with the latest images.
4. Verify upgrade completion.
As a consequence of leveraging BYFN, we have a solo orderer setup; therefore, we will only perform this process once. In a Kafka setup, however, this process will have to be repeated on each orderer.
Note: This tutorial uses a docker deployment. For native deployments, replace the orderer binary with the one from the release artifacts. Back up the orderer.yaml and replace it with the orderer.yaml file from the release artifacts. Then port any modified variables from the backed-up orderer.yaml to the new one. Utilizing a utility like diff may be helpful.
export LEDGERS_BACKUP=./ledgers-backup
We have created a variable for a directory to put file backups into, and exported the IMAGE_TAG we’d like to move
to.
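The tag export looks like this (an assumption here: BYFN’s compose files read the hyperledger image tag from an IMAGE_TAG variable; substitute the actual release for 1.4.x):

export IMAGE_TAG=1.4.x

Then stop the orderer:

docker stop orderer.example.com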
Once the orderer is down, you’ll want to backup its ledger and MSP:
mkdir -p $LEDGERS_BACKUP
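docker cp orderer.example.com:/var/hyperledger/production/orderer/ ./$LEDGERS_BACKUP/orderer.example.com

(The container’s /var/hyperledger/production/orderer ledger path is the BYFN default; adjust it if your deployment differs.)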
In a production network this process would be repeated for each of the Kafka-based orderers in a rolling fashion.
Now download and restart the orderer with our new fabric image:
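Assuming the BYFN compose file, the restart looks like this:

docker-compose -f docker-compose-cli.yaml up -d --no-deps orderer.example.com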
Because our sample uses a “solo” ordering service, there are no other orderers in the network that the restarted orderer must sync up to. However, in a production network leveraging Kafka, it is a best practice to issue peer channel fetch <blocknumber> after restarting the orderer to verify that it has caught up to the other orderers.
Next, let’s look at how to upgrade peer containers to Fabric v1.4. Peer containers should, like the orderers, be upgraded
in a rolling fashion (one at a time). As mentioned during the orderer upgrade, orderers and peers may be upgraded in
parallel, but for the purposes of this tutorial we’ve separated the processes out. At a high level, we will perform the
following steps:
1. Stop the peer.
2. Back up the peer’s ledger and MSP.
3. Remove chaincode containers and images.
4. Restart the peer with latest image.
5. Verify upgrade completion.
We have four peers running in our network. We will perform this process once for each peer, totaling four upgrades.
Note: Again, this tutorial utilizes a docker deployment. For native deployments, replace the peer binary with the one from the release artifacts. Back up your core.yaml and replace it with the one from the release artifacts. Port any modified variables from the backed-up core.yaml to the new one. Utilizing a utility like diff may be helpful.
Let’s bring down the first peer with the following command:
export PEER=peer0.org1.example.com
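docker stop $PEER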
mkdir -p $LEDGERS_BACKUP
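docker cp $PEER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER

(As with the orderer, the /var/hyperledger/production path is the BYFN default.)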
With the peer stopped and the ledger backed up, remove the peer chaincode containers:
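A sketch of the cleanup, matching the dev-$PEER naming convention that the peer uses for chaincode containers and images:

CC_CONTAINERS=$(docker ps | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi
CC_IMAGES=$(docker images | grep dev-$PEER | awk '{print $1}')
if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi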
Now we’ll re-launch the peer using the v1.4 image tag:
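docker-compose -f docker-compose-cli.yaml up -d --no-deps $PEER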
Note: Although BYFN supports using CouchDB, we opted for a simpler implementation in this tutorial. If you are using CouchDB, however, issue this command instead of the one above:
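docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d --no-deps $PEER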
Note: You do not need to relaunch the chaincode container. When the peer gets a request for a chaincode (invoke or query), it first checks if it has a copy of that chaincode running. If so, it uses it. Otherwise, as in this case, the peer launches the chaincode (rebuilding the image if required).
We’ve completed the upgrade for our first peer, but before we move on let’s check to ensure the upgrade has been
completed properly with a chaincode invoke.
Note: Before you attempt this, you may want to upgrade peers from enough organizations to satisfy your endorsement policy. However, this is only mandatory if you are updating your chaincode as part of the upgrade process. If you are not updating your chaincode as part of the upgrade process, it is possible to get endorsements from peers running different Fabric versions.
Before we get into the CLI container and issue the invoke, make sure the CLI is updated to the most current version
by issuing:
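A sketch, assuming the BYFN compose file and CLI service name:

docker-compose -f docker-compose-cli.yaml stop cli
docker-compose -f docker-compose-cli.yaml up -d --no-deps cli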
Once you have the version of the CLI you want, get into the CLI container:
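docker exec -it cli bash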
Now you’ll need to set two environment variables – the name of the channel and the location of the orderer CA:

CH_NAME=mychannel
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Then issue an invoke that moves 10 from a to b, targeting a peer in each org:

peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C $CH_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}'
Our query earlier revealed a to have a value of 90 and we have just removed 10 with our invoke. Therefore, a query
against a should reveal 80. Let’s see:
Query Result: 80
After verifying the peer was upgraded correctly, make sure to issue an exit to leave the container before continuing
to upgrade your peers. You can do this by repeating the process above with a different peer name exported.
export PEER=peer1.org1.example.com
export PEER=peer0.org2.example.com
export PEER=peer1.org2.example.com
Although this is the end of our update tutorial, there are other components that exist in production networks that are
not compatible with the BYFN sample. In this section, we’ll talk through the process of updating them.
Fabric CA container
To learn how to upgrade your Fabric CA server, click over to the CA documentation.
Note: Upgrade Fabric and Fabric CA before upgrading Node SDK clients. Fabric and Fabric CA are tested for
backwards compatibility with older SDK clients. While newer SDK clients often work with older Fabric and Fabric
CA releases, they may expose features that are not yet available in the older Fabric and Fabric CA releases, and are
not tested for full compatibility.
Use NPM to upgrade any Node.js client by executing these commands in the root directory of your application:
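npm install fabric-client@latest
npm install fabric-ca-client@latest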
These commands install the new version of both the Fabric client and the Fabric-CA client and write the new versions to package.json.
It is not required, but it is recommended that the Kafka cluster be upgraded and kept up to date along with the rest of
Fabric. Newer versions of Kafka support older protocol versions, so you may upgrade Kafka before or after the rest of
Fabric.
If you followed the Upgrading Your Network to v1.3 tutorial, your Kafka cluster should be at v1.0.0. If it isn’t, refer
to the official Apache Kafka documentation on upgrading Kafka from previous versions to upgrade the Kafka cluster
brokers.
Upgrading Zookeeper
An Apache Kafka cluster requires an Apache Zookeeper cluster. The Zookeeper API has been stable for a long time and, as such, almost any version of Zookeeper is tolerated by Kafka. Refer to the Apache Kafka upgrade documentation in case there is a specific requirement to upgrade to a specific version of Zookeeper. If you would like to upgrade your Zookeeper cluster, some information on upgrading a Zookeeper cluster can be found in the Zookeeper FAQ.
Upgrading CouchDB
If you are using CouchDB as state database, you should upgrade the peer’s CouchDB at the same time the peer is
being upgraded. CouchDB v2.2.0 has been tested with Fabric v1.4.
To upgrade CouchDB:
1. Stop CouchDB.
2. Backup CouchDB data directory.
3. Install CouchDB v2.2.0 binaries or update deployment scripts to use a new Docker image (CouchDB v2.2.0
pre-configured Docker image is provided alongside Fabric v1.4).
4. Restart CouchDB.
To move to the new version of the Node chaincode shim a developer would need to:
1. Change the level of fabric-shim in their chaincode package.json from 1.3 to 1.4.
2. Repackage this new chaincode package and install it on all the endorsing peers in the channel.
3. Perform an upgrade to this new chaincode. To see how to do this, check out peer chaincode.
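For example, the dependency entry in the chaincode’s package.json would move from a 1.3 level to a 1.4 level (the exact version strings here are illustrative):

"dependencies": {
    "fabric-shim": "~1.4.0"
}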
Note: This flow isn’t specific to moving from 1.3 to 1.4. It is also how one would upgrade from any incremental
version of the node fabric shim.
Note: The v1.3.0 shim is compatible with the v1.4 peer, but it is still best practice to upgrade the chaincode shim to match the current level of the peer.
A number of third party tools exist that will allow you to vendor a chaincode shim. If you used one of these tools, use
the same one to update your vendoring and re-package your chaincode.
If your chaincode vendors the shim, after updating the shim version, you must install it to all peers which already have
the chaincode. Install it with the same name, but a newer version. Then you should execute a chaincode upgrade on
each channel where this chaincode has been deployed to move to the new version.
If you did not vendor your chaincode, you can skip this step entirely.
This tutorial will demonstrate the use of collections to provide storage and retrieval of private data on the blockchain
network for authorized peers of organizations.
The information in this tutorial assumes knowledge of private data stores and their use cases. For more information,
check out Private data.
The tutorial will take you through the following steps to practice defining, configuring and using private data with
Fabric:
1. Build a collection definition JSON file
2. Read and Write private data using chaincode APIs
3. Install and instantiate chaincode with a collection
4. Store private data
5. Query the private data as an authorized peer
6. Query the private data as an unauthorized peer
7. Purge Private Data
8. Using indexes with private data
9. Additional resources
This tutorial will use the marbles private data sample — running on the Building Your First Network (BYFN) tutorial network — to demonstrate how to create, deploy, and use a collection of private data. You should have completed the task Install Samples, Binaries and Docker Images; however, running the BYFN tutorial is not a prerequisite for this tutorial.
Instead the necessary commands are provided throughout this tutorial to use the network. We will describe what is
happening at each step, making it possible to understand the tutorial without actually running the sample.
The first step in privatizing data on a channel is to build a collection definition which defines access to the private data.
The collection definition describes who can persist data, how many peers the data is distributed to, how many peers are
required to disseminate the private data, and how long the private data is persisted in the private database. Later, we
will demonstrate how chaincode APIs PutPrivateData and GetPrivateData are used to map the collection
to the private data being secured.
A collection definition is composed of the following properties:
• name: Name of the collection.
• policy: Defines the organization peers allowed to persist the collection data.
• requiredPeerCount: Number of peers required to disseminate the private data as a condition of the endorsement of the chaincode.
• maxPeerCount: For data redundancy purposes, the number of other peers that the current endorsing peer will
attempt to distribute the data to. If an endorsing peer goes down, these other peers are available at commit time
if there are requests to pull the private data.
• blockToLive: For very sensitive information such as pricing or personal information, this value represents
how long the data should live on the private database in terms of blocks. The data will live for this specified
number of blocks on the private database and after that it will get purged, making this data obsolete from the
network. To keep private data indefinitely, that is, to never purge private data, set the blockToLive property
to 0.
• memberOnlyRead: a value of true indicates that peers automatically enforce that only clients belonging to
one of the collection member organizations are allowed read access to private data.
To illustrate usage of private data, the marbles private data example contains two private data collection defini-
tions: collectionMarbles and collectionMarblePrivateDetails. The policy property in the
collectionMarbles definition allows all members of the channel (Org1 and Org2) to have the private data in
a private database. The collectionMarblePrivateDetails collection allows only members of Org1 to
have the private data in their private database.
For more information on building a policy definition refer to the Endorsement policies topic.
// collections_config.json
[
{
"name": "collectionMarbles",
"policy": "OR('Org1MSP.member', 'Org2MSP.member')",
"requiredPeerCount": 0,
"maxPeerCount": 3,
"blockToLive":1000000,
"memberOnlyRead": true
},
{
"name": "collectionMarblePrivateDetails",
"policy": "OR('Org1MSP.member')",
"requiredPeerCount": 0,
"maxPeerCount": 3,
"blockToLive":3,
"memberOnlyRead": true
}
]
The data to be secured by these policies is mapped in chaincode and will be shown later in the tutorial.
This collection definition file is deployed on the channel when its associated chaincode is instantiated on the channel
using the peer chaincode instantiate command. More details on this process are provided in Section 3 below.
The next step in understanding how to privatize data on a channel is to build the data definition in the chaincode. The
marbles private data sample divides the private data into two separate data definitions according to how the data will
be accessed.
// Peers in Org1 and Org2 will have this private data in a side database
type marble struct {
	ObjectType string `json:"docType"`
	Name       string `json:"name"`
	Color      string `json:"color"`
	Size       int    `json:"size"`
	Owner      string `json:"owner"`
}

// Only peers in Org1 will have this private data in a side database
type marblePrivateDetails struct {
	ObjectType string `json:"docType"`
	Name       string `json:"name"`
	Price      int    `json:"price"`
}
Use the chaincode API GetPrivateData() to query private data in the database. GetPrivateData()
takes two arguments, the collection name and the data key. Recall the collection collectionMarbles
allows members of Org1 and Org2 to have the private data in a side database, and the collection
collectionMarblePrivateDetails allows only members of Org1 to have the private data in a side database.
For implementation details refer to the following two marbles private data functions:
• readMarble for querying the values of the name, color, size and owner attributes
• readMarblePrivateDetails for querying the values of the price attribute
When we issue the database queries using the peer commands later in this tutorial, we will call these two functions.
Use the chaincode API PutPrivateData() to store the private data into the private database. The API also requires
the name of the collection. Since the marbles private data sample includes two different collections, it is called twice
in the chaincode:
1. Write the private data name, color, size and owner using the collection named
collectionMarbles.
2. Write the private data price using the collection named collectionMarblePrivateDetails.
For example, in the following snippet of the initMarble function, PutPrivateData() is called twice, once for
each set of private data.
// ==== Create marble object, marshal to JSON, and save to state ====
marble := &marble{
	ObjectType: "marble",
	Name:       marbleInput.Name,
	Color:      marbleInput.Color,
	Size:       marbleInput.Size,
	Owner:      marbleInput.Owner,
}
marbleJSONasBytes, err := json.Marshal(marble)
if err != nil {
	return shim.Error(err.Error())
}

// === Save marble to state ===
err = stub.PutPrivateData("collectionMarbles", marbleInput.Name, marbleJSONasBytes)
if err != nil {
	return shim.Error(err.Error())
}

// ==== Create marble private details object with price, marshal to JSON, and save to state ====
marblePrivateDetails := &marblePrivateDetails{
	ObjectType: "marblePrivateDetails",
	Name:       marbleInput.Name,
	Price:      marbleInput.Price,
}
marblePrivateDetailsBytes, err := json.Marshal(marblePrivateDetails)
if err != nil {
	return shim.Error(err.Error())
}
err = stub.PutPrivateData("collectionMarblePrivateDetails", marbleInput.Name, marblePrivateDetailsBytes)
if err != nil {
	return shim.Error(err.Error())
}
To summarize, the policy definition above for our collections_config.json allows all peers in Org1 and Org2 to store and transact with the marbles private data name, color, size, owner in their private database. But only peers in Org1 can store and transact with the price private data in their private database.
As an additional data privacy benefit, since a collection is being used, only hashes of the private data go through the orderer, not the private data itself, keeping the private data confidential from the orderer.
Now we are ready to step through some commands which demonstrate using private data.
Try it yourself
Before installing and instantiating the marbles private data chaincode below, we need to start the BYFN
network. For the sake of this tutorial, we want to operate from a known initial state. The following com-
mand will kill any active or stale docker containers and remove previously generated artifacts. Therefore
let’s run the following command to clean up any previous environments:
cd fabric-samples/first-network
./byfn.sh down
If you’ve already run through this tutorial, you’ll also want to delete the underlying docker containers for
the marbles private data chaincode. Let’s run the following commands to clean up previous environments:
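A sketch of the cleanup, matching the dev-peer naming convention for chaincode containers and images:

docker rm -f $(docker ps -a | awk '($2 ~ /dev-peer.*.marblesp.*/) {print $1}')
docker rmi -f $(docker images | grep dev-peer.*.marblesp.* | awk '{print $3}')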
Start up the BYFN network with CouchDB by running the following command:
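./byfn.sh up -c mychannel -s couchdb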
This will create a simple Fabric network consisting of a single channel named mychannel with two
organizations (each maintaining two peer nodes) and an ordering service while using CouchDB as the
state database. Either LevelDB or CouchDB may be used with collections. CouchDB was chosen to
demonstrate how to use indexes with private data.
Note: For collections to work, it is important to have cross-organizational gossip configured correctly. Refer to our documentation on the Gossip data dissemination protocol, paying particular attention to the section on “anchor peers”. Our tutorial does not focus on gossip, given that it is already configured in the BYFN sample, but when configuring a channel, the gossip anchor peers are critical to configure for collections to work properly.
Client applications interact with the blockchain ledger through chaincode. As such we need to install and instantiate the chaincode on every peer that will execute and endorse our transactions. Chaincode is installed onto a peer and then instantiated onto the channel using peer commands.
As discussed above, the BYFN network includes two organizations, Org1 and Org2, with two peers each. Therefore
the chaincode has to be installed on four peers:
• peer0.org1.example.com
• peer1.org1.example.com
• peer0.org2.example.com
• peer1.org2.example.com
Use the peer chaincode install command to install the Marbles chaincode on each peer.
Try it yourself
Assuming you have started the BYFN network, enter the CLI container.
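docker exec -it cli bash

1. Install the chaincode on peer0.org1, the CLI container’s default target peer:

peer chaincode install -n marblesp -v 1.0 -p github.com/chaincode/marbles02_private/go/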
2. Use the CLI to switch the active peer to the second peer in Org1 and install the chaincode. Copy and paste the following entire block of commands into the CLI container and run them.

export CORE_PEER_ADDRESS=peer1.org1.example.com:7051
peer chaincode install -n marblesp -v 1.0 -p github.com/chaincode/marbles02_private/go/
3. Use the CLI to switch to Org2. Copy and paste the following block of commands as a group into the
peer container and run them all at once.
export CORE_PEER_LOCALMSPID=Org2MSP
export PEER0_ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG2_CA
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
4. Switch the active peer to the first peer in Org2 and install the chaincode:

export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
peer chaincode install -n marblesp -v 1.0 -p github.com/chaincode/marbles02_private/go/

5. Switch the active peer to the second peer in Org2 and install the chaincode:

export CORE_PEER_ADDRESS=peer1.org2.example.com:7051
peer chaincode install -n marblesp -v 1.0 -p github.com/chaincode/marbles02_private/go/
Use the peer chaincode instantiate command to instantiate the marbles chaincode on a channel. To configure the chain-
code collections on the channel, specify the flag --collections-config along with the name of the collections
JSON file, collections_config.json in our example.
Try it yourself
Run the following commands to instantiate the marbles private data chaincode on the BYFN channel
mychannel.
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
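The instantiate command looks like this (endorsement policy and init args as in the marbles private data sample):

peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n marblesp -v 1.0 -c '{"Args":["init"]}' -P "OR('Org1MSP.member','Org2MSP.member')" --collections-config $GOPATH/src/github.com/chaincode/marbles02_private/collections_config.json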
Note: When specifying the value of the --collections-config flag, you will need to specify the fully qualified path to the collections_config.json file. For example: --collections-config $GOPATH/src/github.com/chaincode/marbles02_private/collections_config.json
When the instantiation completes successfully you should see something similar to:
Acting as a member of Org1, who is authorized to transact with all of the private data in the marbles private data
sample, switch back to an Org1 peer and submit a request to add a marble:
Try it yourself
Copy and paste the following set of commands to the CLI command line.
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export PEER0_ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Invoke the marbles initMarble function which creates a marble with private data — name marble1
owned by tom with a color blue, size 35 and price of 99. Recall that private data price will be stored
separately from the private data name, owner, color, size. For this reason, the initMarble function
calls the PutPrivateData() API twice to persist the private data, once for each collection. Also note
that the private data is passed using the --transient flag. Inputs passed as transient data will not
be persisted in the transaction in order to keep the data private. Transient data is passed as binary data
and therefore when using the CLI it must be base64 encoded. We use an environment variable to capture the base64 encoded value, and use the tr command to strip off the problematic newline characters that the linux base64 command adds.
export MARBLE=$(echo -n "{\"name\":\"marble1\",\"color\":\"blue\",\"size\":35,\"owner\":\"tom\",\"price\":99}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["initMarble"]}' --transient "{\"marble\":\"$MARBLE\"}"
Our collection definition allows all members of Org1 and Org2 to have the name, color, size, owner private
data in their side database, but only peers in Org1 can have the price private data in their side database. As an
authorized peer in Org1, we will query both sets of private data.
The first query command calls the readMarble function which passes collectionMarbles as an argument.
// ===============================================
// readMarble - read a marble from chaincode state
// ===============================================
	name = args[0]
	valAsbytes, err := stub.GetPrivateData("collectionMarbles", name) //get the marble from chaincode state
	if err != nil {
		jsonResp = "{\"Error\":\"Failed to get state for " + name + "\"}"
		return shim.Error(jsonResp)
	} else if valAsbytes == nil {
		jsonResp = "{\"Error\":\"Marble does not exist: " + name + "\"}"
		return shim.Error(jsonResp)
	}

	return shim.Success(valAsbytes)
}
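From the CLI, this query looks like:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarble","marble1"]}'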
The second query command calls the readMarblePrivateDetails function which passes
collectionMarblePrivateDetails as an argument.
// ===============================================
// readMarblePrivateDetails - read a marble private details from chaincode state
// ===============================================
	if len(args) != 1 {
		return shim.Error("Incorrect number of arguments. Expecting name of the marble to query")
	}

	name = args[0]
	valAsbytes, err := stub.GetPrivateData("collectionMarblePrivateDetails", name) //get the marble private details from chaincode state
	if err != nil {
		jsonResp = "{\"Error\":\"Failed to get private details for " + name + ": " + err.Error() + "\"}"
		return shim.Error(jsonResp)
	} else if valAsbytes == nil {
		jsonResp = "{\"Error\":\"Marble private details does not exist: " + name + "\"}"
		return shim.Error(jsonResp)
	}

	return shim.Success(valAsbytes)
}
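And the corresponding CLI query:

peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

As an authorized peer in Org1, this returns the price data:

{"docType":"marblePrivateDetails","name":"marble1","price":99}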
Now we will switch to a member of Org2 which has the marbles private data name, color, size, owner in
its side database, but does not have the marbles price private data in its side database. We will query for both sets of
private data.
From inside the docker container, run the following commands to switch to the peer which is unauthorized to access
the marbles price private data.
Try it yourself
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
export CORE_PEER_LOCALMSPID=Org2MSP
export PEER0_ORG2_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG2_CA
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
Peers in Org2 should have the first set of marbles private data (name, color, size and owner) in their side
database and can access it using the readMarble() function which is called with the collectionMarbles
argument.
Try it yourself
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarble","marble1"]}'
Peers in Org2 do not have the marbles price private data in their side database. When they try to query for this data,
they get back a hash of the key matching the public state but will not have the private state.
Try it yourself
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'

Private data matching public hash version is not available. Public hash version = &version.Height{BlockNum:0x6, TxNum:0x0}, Private data version = (*version.Height)(nil)
Members of Org2 will only be able to see the public hash of the private data.
For use cases where private data only needs to be on the ledger until it can be replicated into an off-chain database, it is possible to “purge” the data after a certain number of blocks, leaving behind only a hash of the data that serves as immutable evidence of the transaction.
There may be private data including personal or confidential information, such as the pricing data in our example, that
the transacting parties don’t want disclosed to other organizations on the channel. Thus, it has a limited lifespan, and
can be purged after existing unchanged on the blockchain for a designated number of blocks using the blockToLive
property in the collection definition.
Our collectionMarblePrivateDetails definition has a blockToLive property value of three, meaning this data will live on the side database for three blocks and then be purged. Tying all of the
pieces together, recall this collection definition collectionMarblePrivateDetails is associated with the
price private data in the initMarble() function when it calls the PutPrivateData() API and passes the
collectionMarblePrivateDetails as an argument.
We will step through adding blocks to the chain, and then watch the price information get purged by issuing four new
transactions (Create a new marble, followed by three marble transfers) which adds four new blocks to the chain. After
the fourth transaction (third marble transfer), we will verify that the price private data is purged.
Try it yourself
Switch back to peer0 in Org1 using the following commands. Copy and paste the following code block
and run it inside your peer container:
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export PEER0_ORG1_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Open a new terminal window and view the private data logs for this peer by running the following com-
mand:
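docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'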
You should see results similar to the following. Note the highest block number in the list. In the example
below, the highest block height is 4.
[pvtdatastorage] func1 -> INFO 023 Purger started: Purging expired private data till block number [0]
Back in the peer container, query for the marble1 price data by running the following command. (A
Query does not create a new transaction on the ledger since no data is transacted).
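peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'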
{"docType":"marblePrivateDetails","name":"marble1","price":99}
Create a new marble2 by issuing the following command. This transaction creates a new block on the
chain.
export MARBLE=$(echo -n "{\"name\":\"marble2\",\"color\":\"blue\",\"size\":35,\"owner\":\"tom\",\"price\":99}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["initMarble"]}' --transient "{\"marble\":\"$MARBLE\"}"
Switch back to the Terminal window and view the private data logs for this peer again. You should see
the block height increase by 1.
docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'
Back in the peer container, query for the marble1 price data again by running the following command:
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'
The private data has not been purged; therefore the results are unchanged from the previous query:
{"docType":"marblePrivateDetails","name":"marble1","price":99}
Transfer marble2 to “joe” by running the following command. This transaction will add a second new
block on the chain.
export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"joe\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"
Switch back to the Terminal window and view the private data logs for this peer again. You should see
the block height increase by 1.
docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'
Back in the peer container, query for the marble1 price data by running the following command:
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'
Transfer marble2 to “tom” by running the following command. This transaction will create a third new
block on the chain.
export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"tom\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"
Switch back to the Terminal window and view the private data logs for this peer again. You should see
the block height increase by 1.
docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'
Back in the peer container, query for the marble1 price data by running the following command:
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'
Finally, transfer marble2 to “jerry” by running the following command. This transaction will create a
fourth new block on the chain. The price private data should be purged after this transaction.
export MARBLE_OWNER=$(echo -n "{\"name\":\"marble2\",\"owner\":\"jerry\"}" | base64 | tr -d \\n)
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n marblesp -c '{"Args":["transferMarble"]}' --transient "{\"marble_owner\":\"$MARBLE_OWNER\"}"
Switch back to the Terminal window and view the private data logs for this peer again. You should see
the block height increase by 1.
docker logs peer0.org1.example.com 2>&1 | grep -i -a -E 'private|pvt|privdata'
Back in the peer container, query for the marble1 price data by running the following command:
peer chaincode query -C mychannel -n marblesp -c '{"Args":["readMarblePrivateDetails","marble1"]}'
Because the price data has been purged, you should no longer be able to see it. You should see something
similar to:
Error: endorsement failure during query. response: status:500 message:"{\"Error\":\"Marble private details does not exist: marble1\"}"
Indexes can also be applied to private data collections by packaging indexes in the META-INF/statedb/couchdb/collections/<collection_name>/indexes directory alongside the chaincode. An example index is available here.
For deployment of chaincode to production environments, it is recommended to define any indexes alongside chain-
code so that the chaincode and supporting indexes are deployed automatically as a unit, once the chaincode has been
installed on a peer and instantiated on a channel. The associated indexes are automatically deployed upon chaincode
instantiation on the channel when the --collections-config flag is specified pointing to the location of the
collection JSON file.
For additional private data education, a video tutorial has been created.
Chaincode is a program, written in Go, node.js, or Java that implements a prescribed interface. Chaincode runs in a
secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state
through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a
“smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly
by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke
another chaincode to access its state.
We offer two different perspectives on chaincode. One, from the perspective of an application developer developing
a blockchain application/solution entitled Chaincode for Developers, and the other, Chaincode for Operators oriented
to the blockchain network operator who is responsible for managing a blockchain network, and who would leverage
the Hyperledger Fabric API to install, instantiate, and upgrade chaincode, but would likely not be involved in the
development of a chaincode application.
Chaincode is a program, written in Go, node.js, or Java that implements a prescribed interface. Chaincode runs in a
secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state
through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it is similar to a “smart contract”. A chaincode can be invoked to update or query the ledger in a proposal transaction. Given the appropriate permission, a chaincode may invoke another chaincode, either in the same channel or in different channels, to access its state. Note that, if the called chaincode is on a different channel from the calling chaincode, only a read query is allowed. That is, a called chaincode on a different channel can only be queried; it does not participate in state validation checks in the subsequent commit phase.
In the following sections, we will explore chaincode through the eyes of an application developer. We’ll present a
simple chaincode sample application and walk through the purpose of each method in the Chaincode Shim API.
Every chaincode program must implement the Chaincode interface:
• Go
• node.js
• Java
whose methods are called in response to received transactions. In particular, the Init method is called when a
chaincode receives an instantiate or upgrade transaction so that the chaincode may perform any necessary
initialization, including initialization of application state. The Invoke method is called in response to receiving an
invoke transaction to process transaction proposals.
The other interface in the chaincode “shim” APIs is the ChaincodeStubInterface:
• Go
• node.js
• Java
which is used to access and modify the ledger, and to make invocations between chaincodes.
In this tutorial using Go chaincode, we will demonstrate the use of these APIs by implementing a simple chaincode
application that manages simple “assets”.
Our application is a basic sample chaincode to create assets (key-value pairs) on the ledger.
If you haven’t programmed in Go before, you may want to make sure that you have the Go programming language installed and your system properly configured.
Now, you will want to create a directory for your chaincode application as a child directory of $GOPATH/src/.
To keep things simple, let’s use the following command:
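mkdir -p $GOPATH/src/sacc && cd $GOPATH/src/sacc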
Now, let’s create the source file that we’ll fill in with code:
touch sacc.go
Housekeeping
First, let’s start with some housekeeping. As with every chaincode, ours implements the Chaincode interface, in particular its Init and Invoke functions. So, let’s add the Go import statements for the necessary dependencies for our chaincode. We’ll import the chaincode shim package and the peer protobuf package. Next, let’s add a struct SimpleAsset as a receiver for the Chaincode shim functions.
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	"github.com/hyperledger/fabric/protos/peer"
)

// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}
Note: Note that chaincode upgrade also calls this function. When writing a chaincode that will upgrade an existing
one, make sure to modify the Init function appropriately. In particular, provide an empty “Init” method if there’s no
“migration” or nothing to be initialized as part of the upgrade.
Next, we’ll retrieve the arguments to the Init call using the ChaincodeStubInterface.GetStringArgs function and
check for validity. In our case, we are expecting a key-value pair.
// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
	// Get the args from the transaction proposal
	args := stub.GetStringArgs()
	if len(args) != 2 {
		return shim.Error("Incorrect arguments. Expecting a key and a value")
	}
}
Next, now that we have established that the call is valid, we’ll store the initial state in the ledger. To do this, we will
call ChaincodeStubInterface.PutState with the key and value passed in as the arguments. Assuming all went well,
return a peer.Response object that indicates the initialization was a success.
// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
	// Get the args from the transaction proposal
	args := stub.GetStringArgs()
	if len(args) != 2 {
		return shim.Error("Incorrect arguments. Expecting a key and a value")
	}

	// We store the key and the value on the ledger
	err := stub.PutState(args[0], []byte(args[1]))
	if err != nil {
		return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
	}
	return shim.Success(nil)
}
As with the Init function above, we need to extract the arguments from the ChaincodeStubInterface. The
Invoke function’s arguments will be the name of the chaincode application function to invoke. In our case, our
application will simply have two functions: set and get, that allow the value of an asset to be set or its current state
to be retrieved. We first call ChaincodeStubInterface.GetFunctionAndParameters to extract the function name and the
parameters to that chaincode application function.
Next, we’ll validate the function name as being either set or get, and invoke those chaincode application functions,
returning an appropriate response via the shim.Success or shim.Error functions that will serialize the response
into a gRPC protobuf message.
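Putting those two steps together, the Invoke function looks like this:

// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
	// Extract the function and args from the transaction proposal
	fn, args := stub.GetFunctionAndParameters()

	var result string
	var err error
	if fn == "set" {
		result, err = set(stub, args)
	} else {
		result, err = get(stub, args)
	}
	if err != nil {
		return shim.Error(err.Error())
	}

	// Return the result as success payload
	return shim.Success([]byte(result))
}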
As noted, our chaincode application implements two functions that can be invoked via the Invoke function. Let’s
implement those functions now. Note that as we mentioned above, to access the ledger’s state, we will leverage the
ChaincodeStubInterface.PutState and ChaincodeStubInterface.GetState functions of the chaincode shim API.
// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
	if len(args) != 2 {
		return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
	}
	err := stub.PutState(args[0], []byte(args[1]))
	if err != nil {
		return "", fmt.Errorf("Failed to set asset: %s", args[0])
	}
	return args[1], nil
}

// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
	if len(args) != 1 {
		return "", fmt.Errorf("Incorrect arguments. Expecting a key")
	}
	value, err := stub.GetState(args[0])
	if err != nil {
		return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
	}
	if value == nil {
		return "", fmt.Errorf("Asset not found: %s", args[0])
	}
	return string(value), nil
}
Finally, we need to add the main function, which will call the shim.Start function. Combined with the Init,
Invoke, set, and get functions above, this completes the chaincode program source.

// main function starts up the chaincode in the container during instantiate
func main() {
	if err := shim.Start(new(SimpleAsset)); err != nil {
		fmt.Printf("Error starting SimpleAsset chaincode: %s", err)
	}
}
Building Chaincode
Now let’s compile your chaincode:
go get -u github.com/hyperledger/fabric/core/chaincode/shim
go build
Assuming there are no errors, now we can proceed to the next step, testing your chaincode.
Normally chaincodes are started and maintained by the peer. However, in “dev mode”, chaincode is built and started by
the user. This mode is useful during the chaincode development phase for a rapid code/build/run/debug cycle turnaround.
We start “dev mode” by leveraging pre-generated orderer and channel artifacts for a sample dev network. As such, the
user can immediately jump into the process of compiling chaincode and driving calls.
If you haven’t already done so, please Install Samples, Binaries and Docker Images.
Navigate to the chaincode-docker-devmode directory of the fabric-samples clone:
cd chaincode-docker-devmode
Now open three terminals and navigate to your chaincode-docker-devmode directory in each.
Terminal 1 - Start the network
docker-compose -f docker-compose-simple.yaml up
The above starts the network with the SingleSampleMSPSolo orderer profile and launches the peer in “dev mode”.
It also launches two additional containers - one for the chaincode environment and a CLI to interact with the chaincode.
The commands for create and join channel are embedded in the CLI container, so we can jump immediately to the
chaincode calls.
Terminal 2 - Build & start the chaincode
docker exec -it chaincode bash
You should see the following:
root@d2629980e76b:/opt/gopath/src/chaincode#
Now, compile your chaincode:
cd sacc
go build
Run the chaincode:
CORE_PEER_ADDRESS=peer:7052 CORE_CHAINCODE_ID_NAME=mycc:0 ./sacc
The chaincode is started with peer and chaincode logs indicating successful registration with the peer. Note that at this
stage the chaincode is not associated with any channel. This is done in subsequent steps using the instantiate
command.
Terminal 3 - Use the chaincode
Even though you are in --peer-chaincodedev mode, you still have to install the chaincode so the lifecycle
system chaincode can go through its checks normally. This requirement may be removed in a future release of
--peer-chaincodedev mode.
We’ll leverage the CLI container to drive these calls.
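For example, using the names baked into the dev-mode sample (chaincode mycc on channel myc), the calls look like the following:

peer chaincode install -p chaincodedev/chaincode/sacc -n mycc -v 0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a","10"]}' -C myc
peer chaincode invoke -n mycc -c '{"Args":["set", "a", "20"]}' -C myc
peer chaincode query -n mycc -c '{"Args":["query","a"]}' -C myc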
By default, we mount only sacc. However, you can easily test different chaincodes by adding them to the
chaincode subdirectory and relaunching your network. At this point they will be accessible in your chaincode
container.
Chaincode can utilize the client (submitter) certificate for access control decisions by calling the GetCreator() function.
Additionally, the Go shim provides extension APIs that extract the client identity from the submitter’s certificate, which
can be used for access control decisions, whether based on the client identity itself, the org identity, or a client
identity attribute.
For example, an asset that is represented as a key/value may include the client’s identity as part of the value (for
example as a JSON attribute indicating the asset owner), and only this client may be authorized to make updates to
the key/value in the future. The client identity library extension APIs can be used within chaincode to retrieve this
submitter information to make such access control decisions.
See the client identity (CID) library documentation for more details.
To add the client identity shim extension to your chaincode as a dependency, see Managing external dependencies for
chaincode written in Go.
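As a minimal sketch (assuming the Fabric 1.x client identity package at github.com/hyperledger/fabric/core/chaincode/lib/cid; the helper name requireOwner is ours, not part of the library), an owner check inside chaincode might look like this:

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/lib/cid"
	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// requireOwner compares the submitting client's unique ID against the owner
// ID previously stored with the asset, and rejects any other caller.
func requireOwner(stub shim.ChaincodeStubInterface, storedOwnerID string) error {
	clientID, err := cid.GetID(stub) // unique ID derived from the client certificate
	if err != nil {
		return err
	}
	if clientID != storedOwnerID {
		return fmt.Errorf("client %s is not the asset owner", clientID)
	}
	return nil
}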
In certain scenarios, it may be useful to encrypt values associated with a key in their entirety or simply in part. For
example, if a person’s social security number or address was being written to the ledger, then you likely would not
want this data to appear in plaintext. Chaincode encryption is achieved by leveraging the entities extension which is a
BCCSP wrapper with commodity factories and functions to perform cryptographic operations such as encryption and
elliptic curve digital signatures. For example, to encrypt, the invoker of a chaincode passes in a cryptographic key via
the transient field. The same key may then be used for subsequent query operations, allowing for proper decryption of
the encrypted state values.
For more information and samples, see the Encc Example within the fabric/examples directory. Pay specific
attention to the utils.go helper program. This utility loads the chaincode shim APIs and Entities extension and
builds a new class of functions (e.g. encryptAndPutState & getStateAndDecrypt) that the sample encryp-
tion chaincode then leverages. As such, the chaincode can now marry the basic shim APIs of Get and Put with the
added functionality of Encrypt and Decrypt.
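As a minimal, self-contained sketch of the transient-field handoff (the field name ENCKEY follows the enccc sample; building the encrypter entity and the encryptAndPutState/getStateAndDecrypt wrappers is left to the sample's utils.go):

import (
	"errors"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// getTransientEncKey retrieves the encryption key that the invoker passed in
// the transient field. Transient data is not written to the ledger, so the
// key itself never appears in plaintext on-chain.
func getTransientEncKey(stub shim.ChaincodeStubInterface) ([]byte, error) {
	tMap, err := stub.GetTransient()
	if err != nil {
		return nil, err
	}
	encKey, ok := tMap["ENCKEY"]
	if !ok {
		return nil, errors.New("expected transient field ENCKEY")
	}
	// The sample's utils.go would now build an encrypter entity from this key
	// and wrap PutState/GetState with encryption and decryption.
	return encKey, nil
}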
To add the encryption entities extension to your chaincode as a dependency, see Managing external dependencies for
chaincode written in Go.
If your chaincode requires packages not provided by the Go standard library, you will need to include those packages
with your chaincode. It is also a good practice to add the shim and any extension libraries to your chaincode as a
dependency.
There are many tools available for managing (or “vendoring”) these dependencies. The following demonstrates how
to use govendor:
govendor init
govendor add +external // Add all external packages, or
govendor add github.com/external/pkg // Add specific external package
This imports the external dependencies into a local vendor directory. If you are vendoring the Fabric shim or shim
extensions, clone the Fabric repository to your $GOPATH/src/github.com/hyperledger directory, before executing the
govendor commands.
Once dependencies are vendored in your chaincode directory, the peer chaincode package and peer
chaincode install operations will then include code associated with the dependencies in the chaincode
package.
Chaincode is a program, written in Go, node.js, or Java, that implements a prescribed interface. Chaincode runs in a
secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state
through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a
“smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly
by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke
another chaincode to access its state.
In the following sections, we will explore chaincode through the eyes of a blockchain network operator, Noah. For
Noah’s interests, we will focus on chaincode lifecycle operations; the process of packaging, installing, instantiating
and upgrading the chaincode as a function of the chaincode’s operational lifecycle within a blockchain network.
The Hyperledger Fabric API enables interaction with the various nodes in a blockchain network - the peers, orderers
and MSPs - and it also allows one to package, install, instantiate and upgrade chaincode on the endorsing peer nodes.
The Hyperledger Fabric language-specific SDKs abstract the specifics of the Hyperledger Fabric API to facilitate
application development, though they can also be used to manage a chaincode’s lifecycle. Additionally, the Hyperledger
Fabric API can be accessed directly via the CLI, which we will use in this document.
We provide four commands to manage a chaincode’s lifecycle: package, install, instantiate, and
upgrade. In a future release, we are considering adding stop and start transactions to disable and re-enable
a chaincode without having to actually uninstall it. After a chaincode has been successfully installed and instantiated,
the chaincode is active (running) and can process transactions via the invoke transaction. A chaincode may be
upgraded any time after it has been installed.
7.9.3 Packaging
There are two approaches to packaging chaincode. One for when you want to have multiple owners of a chaincode,
and hence need to have the chaincode package signed by multiple identities. This workflow requires that we initially
create a signed chaincode package (a SignedCDS) which is subsequently passed serially to each of the other owners
for signing.
The simpler workflow is for when you are deploying a SignedCDS that has only the signature of the identity of the
node that is issuing the install transaction.
We will address the more complex case first. However, you may skip ahead to the Installing chaincode section below
if you do not need to worry about multiple owners just yet.
To create a signed chaincode package, use the following command:
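peer chaincode package -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd -v 0 -s -S -i "AND('OrgA.admin')" ccpack.out

(The chaincode name, path, and version here are illustrative; substitute those of your own chaincode.)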
The -s option creates a package that can be signed by multiple owners as opposed to simply creating a raw CDS.
When -s is specified, the -S option must also be specified if other owners are going to need to sign. Otherwise, the
process will create a SignedCDS that includes only the instantiation policy in addition to the CDS.
The -S option directs the process to sign the package using the MSP identified by the value of the localMspid
property in core.yaml.
The -S option is optional. However, if a package is created without a signature, it cannot be signed by any other owner
using the signpackage command.
The optional -i option allows one to specify an instantiation policy for the chaincode. The instantiation policy has
the same format as an endorsement policy and specifies which identities can instantiate the chaincode. In the example
above, only the admin of OrgA is allowed to instantiate the chaincode. If no policy is provided, the default policy is
used, which only allows the admin identity of the peer’s MSP to instantiate chaincode.
Package signing
A chaincode package that was signed at creation can be handed over to other owners for inspection and signing. The
workflow supports out-of-band signing of a chaincode package.
The ChaincodeDeploymentSpec may optionally be signed by the collective owners to create a
SignedChaincodeDeploymentSpec (or SignedCDS). The SignedCDS contains 3 elements:
1. The CDS contains the source code, the name, and version of the chaincode.
2. An instantiation policy of the chaincode, expressed as endorsement policies.
3. The list of chaincode owners, defined by means of Endorsement.
Note: This endorsement policy is determined out-of-band to provide proper MSP principals when the
chaincode is instantiated on some channels. If the instantiation policy is not specified, the default policy is any MSP
administrator of the channel.
Each owner endorses the ChaincodeDeploymentSpec by combining it with that owner’s identity (e.g. certificate) and
signing the combined result.
A chaincode owner can sign a previously created signed package using the following command:
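peer chaincode signpackage ccpack.out signedccpack.out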
Where ccpack.out and signedccpack.out are the input and output packages, respectively.
signedccpack.out contains an additional signature over the package signed using the Local MSP.
Installing chaincode
The install transaction packages a chaincode’s source code into a prescribed format called a
ChaincodeDeploymentSpec (or CDS) and installs it on a peer node that will run that chaincode.
Note: You must install the chaincode on each endorsing peer node of a channel that will run your chaincode.
When the install API is given simply a ChaincodeDeploymentSpec, it will default the instantiation policy
and include an empty owner list.
Note: Chaincode should only be installed on endorsing peer nodes of the owning members of the chaincode to protect
the confidentiality of the chaincode logic from other members on the network. Those members without the chaincode,
can’t be the endorsers of the chaincode’s transactions; that is, they can’t execute the chaincode. However, they can
still validate and commit the transactions to the ledger.
To install a chaincode, send a SignedProposal to the lifecycle system chaincode (LSCC) described in
the System Chaincode section. For example, to install the sacc sample chaincode described in section Simple Asset
Chaincode using the CLI, the command would look like the following:
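peer chaincode install -n sacc -v 1.0 -p sacc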
The CLI internally creates the SignedChaincodeDeploymentSpec for sacc and sends it to the local peer, which calls
the Install method on the LSCC. The argument to the -p option specifies the path to the chaincode, which must
be located within the source tree of the user’s GOPATH, e.g. $GOPATH/src/sacc. Note if using -l node or
-l java for node chaincode or java chaincode, use -p with the absolute path of the chaincode location. See the
Commands Reference for a complete description of the command options.
Note that in order to install on a peer, the signature of the SignedProposal must be from one of the peer’s local MSP
administrators.
Instantiate
The instantiate transaction invokes the lifecycle System Chaincode (LSCC) to create and initialize a
chaincode on a channel. This is a chaincode-channel binding process: a chaincode may be bound to any number of
channels and operate on each channel individually and independently. In other words, regardless of how many other
channels on which a chaincode might be installed and instantiated, state is kept isolated to the channel to which a
transaction is submitted.
The creator of an instantiate transaction must satisfy the instantiation policy of the chaincode included in Signed-
CDS and must also be a writer on the channel, which is configured as part of the channel creation. This is important
for the security of the channel to prevent rogue entities from deploying chaincodes or tricking members to execute
chaincodes on an unbound channel.
For example, recall that the default instantiation policy is any channel MSP administrator, so the creator of a chaincode
instantiate transaction must be a member of the channel administrators. When the transaction proposal arrives at the
endorser, it verifies the creator’s signature against the instantiation policy. This is done again during the transaction
validation before committing it to the ledger.
The instantiate transaction also sets up the endorsement policy for that chaincode on the channel. The endorsement
policy describes the attestation requirements for the transaction result to be accepted by members of the channel.
For example, using the CLI to instantiate the sacc chaincode and initialize the state with john and 0, the command
would look like the following:
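peer chaincode instantiate -n sacc -v 1.0 -c '{"Args":["john","0"]}' -P "AND ('Org1.member','Org2.member')"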
Note: Note the endorsement policy (the CLI uses Polish notation), which requires an endorsement from both a member
of Org1 and Org2 for all transactions to sacc. That is, both Org1 and Org2 must sign the result of executing the Invoke
on sacc for the transactions to be valid.
After being successfully instantiated, the chaincode enters the active state on the channel and is ready to process any
transaction proposals of type ENDORSER_TRANSACTION. The transactions are processed concurrently as they
arrive at the endorsing peer.
Upgrade
A chaincode may be upgraded any time by changing its version, which is part of the SignedCDS. Other parts, such as
owners and instantiation policy are optional. However, the chaincode name must be the same; otherwise it would be
considered as a totally different chaincode.
Prior to upgrade, the new version of the chaincode must be installed on the required endorsers. Upgrade is a transaction
similar to the instantiate transaction, which binds the new version of the chaincode to the channel. Other channels
bound to the old version of the chaincode still run with the old version. In other words, the upgrade transaction only
affects one channel at a time, the channel to which the transaction is submitted.
Note: Since multiple versions of a chaincode may be active simultaneously, the upgrade process doesn’t
automatically remove the old versions, so the user must manage this for the time being.
There’s one subtle difference with the instantiate transaction: the upgrade transaction is checked against the
current chaincode instantiation policy, not the new policy (if specified). This is to ensure that only existing members
specified in the current instantiation policy may upgrade the chaincode.
Note: During upgrade, the chaincode Init function is called to perform any data-related updates or to
re-initialize it, so care must be taken to avoid resetting states when upgrading chaincode.
Note that stop and start lifecycle transactions have not yet been implemented. However, you may stop a chaincode
manually by removing the chaincode container and the SignedCDS package from each of the endorsers. This is done
by deleting the chaincode’s container on each of the hosts or virtual machines on which the endorsing peer nodes are
running, and then deleting the SignedCDS from each of the endorsing peer nodes:
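docker rm -f <container id>
rm /var/hyperledger/production/chaincodes/<ccname>.<ccversion>

(The file system path shown assumes the default peer configuration; adjust it if your peer stores chaincodes elsewhere.)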
Note: TODO - in order to delete the CDS from the peer node, you would need to enter the peer node’s container,
first. We really need to provide a utility script that can do this.
Stop would be useful in the workflow for doing an upgrade in a controlled manner, where a chaincode can be stopped on a
channel on all peers before issuing an upgrade.
System chaincode has the same programming model except that it runs within the peer process rather than in an
isolated container like normal chaincode. Therefore, system chaincode is built into the peer executable and doesn’t
follow the same lifecycle described above. In particular, install, instantiate and upgrade do not apply to system
chaincodes.
The purpose of system chaincode is to avoid the gRPC communication cost between peer and chaincode, and to trade off
flexibility in management. For example, a system chaincode can only be upgraded with the peer binary. It must
also register with a fixed set of parameters compiled in and doesn’t have endorsement policies or endorsement policy
functionality.
System chaincode is used in Hyperledger Fabric to implement a number of system behaviors so that they can be
replaced or modified as appropriate by a system integrator.
The current list of system chaincodes:
1. LSCC Lifecycle system chaincode handles lifecycle requests described above.
2. CSCC Configuration system chaincode handles channel configuration on the peer side.
3. QSCC Query system chaincode provides ledger query APIs such as getting blocks and transactions.
The former system chaincodes for endorsement and validation have been replaced by the pluggable endorsement and
validation function as described by the Pluggable transaction endorsement and validation documentation.
Extreme care must be taken when modifying or replacing these system chaincodes, especially LSCC.
System chaincodes are specialized chaincodes that run as part of the peer process as opposed to user chaincodes
that run in separate docker containers. As such they have more access to resources in the peer and can be used
for implementing features that are difficult or impossible to be implemented through user chaincodes. Examples of
System Chaincodes include QSCC (Query System Chaincode) for ledger and other Fabric-related queries, CSCC
(Configuration System Chaincode) which helps regulate access control, and LSCC (Lifecycle System Chaincode).
Unlike a user chaincode, a system chaincode is not installed and instantiated using proposals from SDKs or CLI. It is
registered and deployed by the peer at start-up.
System chaincodes can be linked to a peer in two ways: statically, and dynamically using Go plugins. This tutorial
will outline how to develop and load system chaincodes as plugins.
A system chaincode is a program written in Go and loaded using the Go plugin package.
A plugin includes a main package with exported symbols and is built with the command go build
-buildmode=plugin.
Every system chaincode must implement the Chaincode Interface and export a constructor method that matches the
signature func New() shim.Chaincode in the main package. An example can be found in the repository at
examples/plugin/scc.
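A minimal plugin skeleton might look like the following (the type name sysCC is hypothetical; New is the required exported constructor):

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// sysCC is a do-nothing system chaincode used to illustrate the plugin shape.
type sysCC struct{}

func (s *sysCC) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

func (s *sysCC) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// New is the exported constructor the peer looks for when loading the plugin.
func New() shim.Chaincode {
	return &sysCC{}
}

Build it with: go build -buildmode=plugin -o mysyscc.so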
Existing chaincodes such as the QSCC can also serve as templates for certain features, such as access control, that
are typically implemented through system chaincodes. The existing system chaincodes also serve as a reference for
best-practices on things like logging and testing.
Note: On imported packages: the Go standard library requires that a plugin must include the same version of imported
packages as the host application (Fabric, in this case).
Plugins are configured in the chaincode.systemPlugins section of core.yaml:
chaincode:
systemPlugins:
- enabled: true
name: mysyscc
path: /opt/lib/syscc.so
invokableExternal: true
invokableCC2CC: true
The system chaincode must also be whitelisted in the chaincode.system section of core.yaml:
chaincode:
system:
mysyscc: enable
This tutorial will describe the steps required to use CouchDB as the state database with Hyperledger Fabric. By
now, you should be familiar with Fabric concepts and have explored some of the samples and tutorials.
The tutorial will take you through the following steps:
1. Enable CouchDB in Hyperledger Fabric
2. Create an index
3. Add the index to your chaincode folder
4. Install and instantiate the Chaincode
5. Query the CouchDB State Database
6. Query the CouchDB State Database With Pagination
7. Update an Index
8. Delete an Index
For a deeper dive into CouchDB refer to CouchDB as the State Database and for more information on the Fabric
ledger refer to the Ledger topic. Follow the tutorial below for details on how to leverage CouchDB in your blockchain
network.
Throughout this tutorial we will use the Marbles sample as our use case to demonstrate how to use CouchDB with
Fabric, and will deploy Marbles to the Building Your First Network (BYFN) tutorial network. You should have
completed the task Install Samples, Binaries and Docker Images. However, running the BYFN tutorial is not a
prerequisite for this tutorial; instead, the necessary commands are provided throughout to use the network.
Fabric supports two types of peer databases. LevelDB is the default state database embedded in the peer node and
stores chaincode data as simple key-value pairs and supports key, key range, and composite key queries only. CouchDB
is an optional alternate state database that supports rich queries when chaincode data values are modeled as JSON.
Rich queries are more flexible and efficient against large indexed data stores, when you want to query the actual data
value content rather than the keys. CouchDB is a JSON document datastore rather than a pure key-value store, therefore
enabling indexing of the contents of the documents in the database.
In order to leverage the benefits of CouchDB, namely content-based JSON queries, your data must be modeled in JSON
format. You must decide whether to use LevelDB or CouchDB before setting up your network. Switching a peer from
using LevelDB to CouchDB is not supported due to data compatibility issues. All peers on the network must use the
same database type. If you have a mix of JSON and binary data values, you can still use CouchDB, however the binary
values can only be queried based on key, key range, and composite key queries.
CouchDB runs as a separate database process alongside the peer; therefore, there are additional considerations in terms
of setup, management, and operations. A docker image of CouchDB is available and we recommend that it be run on
the same server as the peer. You will need to set up one CouchDB container per peer and update each peer container
by changing the configuration found in core.yaml to point to the CouchDB container. The core.yaml file must
be located in the directory specified by the environment variable FABRIC_CFG_PATH:
• For docker deployments, core.yaml is pre-configured and located in the peer container FABRIC_CFG_PATH
folder. However when using docker environments, you typically pass environment variables by editing the
docker-compose-couch.yaml to override the core.yaml
• For native binary deployments, core.yaml is included with the release artifact distribution.
Edit the stateDatabase section of core.yaml. Specify CouchDB as the stateDatabase and fill in the
associated couchDBConfig properties. For more details on configuring CouchDB to work with fabric, refer here.
To view an example of a core.yaml file configured for CouchDB, examine the BYFN docker-compose-couch.
yaml in the HyperLedger/fabric-samples/first-network directory.
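For reference, the relevant stateDatabase settings look roughly like the following (the address couchdb0:5984 and the credentials are placeholders):

ledger:
  state:
    stateDatabase: CouchDB
    couchDBConfig:
      couchDBAddress: couchdb0:5984
      username:
      password:

In docker environments, the equivalent overrides are typically passed as environment variables, for example:

CORE_LEDGER_STATE_STATEDATABASE=CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984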
Note: Rich queries that do not have an index will work but may throw a warning in the CouchDB log that the index
was not found. However, if a rich query includes a sort specification, then an index on that field is required; otherwise,
the query will fail and an error will be thrown.
To demonstrate building an index, we will use the data from the Marbles sample. In this example, the Marbles data
structure is defined as:
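type marble struct {
	ObjectType string `json:"docType"` // docType is used to distinguish the various types of objects in state database
	Name       string `json:"name"`
	Color      string `json:"color"`
	Size       int    `json:"size"`
	Owner      string `json:"owner"`
}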
In this structure, the attributes (docType, name, color, size, owner) define the ledger data associated with the
asset. The attribute docType is a pattern used in the chaincode to differentiate different data types that may need to
be queried separately. When using CouchDB, it is recommended to include this docType attribute to distinguish each
type of document in the chaincode namespace. (Each chaincode is represented as its own CouchDB database, that is,
each chaincode has its own namespace for keys.)
With respect to the Marbles data structure, docType is used to identify that this document/asset is a marble asset.
Potentially there could be other documents/assets in the chaincode database. The documents in the database are
searchable against all of these attribute values.
When defining an index for use in chaincode queries, each one must be defined in its own text file with the extension
*.json and the index definition must be formatted in the CouchDB index JSON format.
To define an index, three pieces of information are required:
• fields: these are the frequently queried fields
• name: name of the index
• type: always json in this context
For example, a simple index named foo-index for a field named foo:
{
"index": {
"fields": ["foo"]
},
"name" : "foo-index",
"type" : "json"
}
Optionally the design document attribute ddoc can be specified on the index definition. A design document is a
CouchDB construct designed to contain indexes. Indexes can be grouped into design documents for efficiency, but
CouchDB recommends one index per design document.
Tip: When defining an index it is a good practice to include the ddoc attribute and value along with the index name.
It is important to include this attribute to ensure that you can update the index later if needed. Also it gives you the
ability to explicitly specify which index to use on a query.
Here is another example of an index definition from the Marbles sample with the index name indexOwner using
multiple fields docType and owner and includes the ddoc attribute:
{
"index":{
"fields":["docType","owner"] // Names of the fields to be queried
},
"ddoc":"indexOwnerDoc", // (optional) Name of the design document in which the
˓→index will be created.
"name":"indexOwner",
"type":"json"
}
In the example above, if the design document indexOwnerDoc does not already exist, it is automatically created
when the index is deployed. An index can be constructed with one or more attributes specified in the list of fields
and any combination of attributes can be specified. An attribute can exist in multiple indexes for the same docType.
In the following example, index1 only includes the attribute owner, index2 includes the attributes owner and
color and index3 includes the attributes owner, color and size. Also, notice each index definition has its
own ddoc value, following the CouchDB recommended practice.
{
"index":{
"fields":["owner"] // Names of the fields to be queried
},
"ddoc":"index1Doc", // (optional) Name of the design document in which the index
˓→will be created.
"name":"index1",
"type":"json"
}
{
"index":{
"fields":["owner", "color"] // Names of the fields to be queried
},
"ddoc":"index2Doc", // (optional) Name of the design document in which the index
˓→will be created.
"name":"index2",
"type":"json"
}
{
"index":{
"fields":["owner", "color", "size"] // Names of the fields to be queried
},
"ddoc":"index3Doc", // (optional) Name of the design document in which the index
˓→will be created.
"name":"index3",
"type":"json"
}
In general, you should model index fields to match the fields that will be used in query filters and sorts. For more
details on building an index in JSON format refer to the CouchDB documentation.
A final word on indexing: Fabric takes care of indexing the documents in the database using a pattern called index
warming. CouchDB does not typically index new or updated documents until the next query. Fabric ensures that
indexes stay ‘warm’ by requesting an index update after every block of data is committed. This ensures queries are
fast because they do not have to index documents before running the query. This process keeps the index current and
refreshed every time new records are added to the state database.
Once you finalize an index, it is ready to be packaged with your chaincode for deployment by being placed alongside
it in the appropriate metadata folder.
If your chaincode installation and instantiation uses the Hyperledger Fabric Node SDK, the JSON index files can be
located in any folder as long as it conforms to this directory structure. During the chaincode installation using the
client.installChaincode() API, include the attribute (metadataPath) in the installation request. The value of the
metadataPath is a string representing the absolute path to the directory structure containing the JSON index file(s).
Alternatively, if you are using the peer-commands to install and instantiate the chaincode, then the JSON index files
must be located under the path META-INF/statedb/couchdb/indexes which is located inside the directory
where the chaincode resides.
The Marbles sample below illustrates how the index is packaged with the chaincode which will be installed using the
peer commands.
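The index file sits under META-INF in the chaincode source tree (mirroring the fabric-samples marbles02 layout):

marbles02
└── go
    ├── META-INF
    │   └── statedb
    │       └── couchdb
    │           └── indexes
    │               └── indexOwner.json
    └── marbles_chaincode.go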
Try it yourself
Before installing and instantiating the marbles chaincode, we need to start up the BYFN network. For
the sake of this tutorial, we want to operate from a known initial state. The following command will kill
any active or stale docker containers and remove previously generated artifacts. Therefore let’s run the
following command to clean up any previous environments:
cd fabric-samples/first-network
./byfn.sh down
Now start up the BYFN network with CouchDB by running the following command:
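./byfn.sh up -c mychannel -s couchdb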
This will create a simple Fabric network consisting of a single channel named mychannel with two orga-
nizations (each maintaining two peer nodes) and an ordering service while using CouchDB as the state
database.
Client applications interact with the blockchain ledger through chaincode. As such we need to install the chaincode on
every peer that will execute and endorse our transactions and instantiate the chaincode on the channel. In the previous
section, we demonstrated how to package the chaincode, so it should be ready for deployment.
Chaincode is installed onto a peer and then instantiated onto the channel using peer-commands.
1. Use the peer chaincode install command to install the Marbles chaincode on a peer.
Try it yourself
Assuming you have started the BYFN network, navigate into the CLI container using the command:
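docker exec -it cli bash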
Use the following command to install the Marbles chaincode from the git repository onto a peer in your
BYFN network. The CLI container defaults to using peer0 of org1:
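peer chaincode install -n marbles -v 1.0 -p github.com/chaincode/marbles02/go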
2. Issue the peer chaincode instantiate command to instantiate the chaincode on a channel.
Try it yourself
To instantiate the Marbles sample on the BYFN channel mychannel run the following command:
export CHANNEL_NAME=mychannel
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -v 1.0 -c '{"Args":["init"]}' -P "OR ('Org0MSP.peer','Org1MSP.peer')"
Indexes will be deployed to each peer’s CouchDB state database once the chaincode is both installed on the peer and
instantiated on the channel. You can verify that the CouchDB index was created successfully by examining the peer
log in the Docker container.
Try it yourself
To view the logs in the peer docker container, open a new Terminal window and run the following command
to grep for message confirmation that the index was created:

docker logs peer0.org1.example.com 2>&1 | grep "CouchDB index"

You should see a message similar to the following:

Created CouchDB index [indexOwner] in state database [mychannel_marbles] using design document [_design/indexOwnerDoc]
Note: If Marbles was not installed on the BYFN peer peer0.org1.example.com, you may need
to replace it with the name of a different peer where Marbles was installed.
Now that the index has been defined in the JSON file and deployed alongside the chaincode, chaincode functions can
execute JSON queries against the CouchDB state database, and peer commands can invoke those chaincode
functions.
Specifying an index name on a query is optional. If not specified, and an index already exists for the fields being
queried, the existing index will be automatically used.
Tip: It is a good practice to explicitly include an index name on a query using the use_index keyword. Without it,
CouchDB may pick a less optimal index, or may not use an index at all, and you may not notice this at the low data
volumes typical of testing. Only at higher volumes will you see slow performance, because CouchDB is not using an
index when you assumed it was.
You can perform complex rich queries against the chaincode data values using the CouchDB JSON query language
within chaincode. As we explored above, the marbles02 sample chaincode includes an index and rich queries are
defined in the functions - queryMarbles and queryMarblesByOwner:
• queryMarbles –
Example of an ad hoc rich query. This is a query where a (selector) string can be passed into the
function. This query would be useful to client applications that need to dynamically build their own
selectors at runtime. For more information on selectors refer to CouchDB selector syntax.
• queryMarblesByOwner –
Example of a parameterized query where the query logic is baked into the chaincode. In this case
the function accepts a single argument, the marble owner. It then queries the state database for JSON
documents matching the docType of “marble” and the owner id using the JSON query syntax.
In absence of a client application to test rich queries defined in chaincode, peer commands can be used. Peer commands
run from the command line inside the docker container. We will customize the peer chaincode query command to use
the Marbles index indexOwner and query for all marbles owned by “tom” using the queryMarbles function.
Try it yourself
Before querying the database, we should add some data. Run the following command in the peer container
to create a marble owned by “tom”:
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}'
After an index has been deployed during chaincode instantiation, it will automatically be utilized by
chaincode queries. CouchDB can determine which index to use based on the fields being queried. If an
index exists for the query criteria it will be used. However the recommended approach is to specify the
use_index keyword on the query. The peer command below is an example of how to specify the index
explicitly in the selector syntax by including the use_index keyword:
˓→":[\"_design/indexOwnerDoc\", \"indexOwner\"]}"]}'
Delving into the query command above, there are three arguments of interest:
• queryMarbles
Name of the function in the Marbles chaincode. Notice the shim.ChaincodeStubInterface
is used to access and modify the ledger. The getQueryResultForQueryString() passes the
queryString to the shim API GetQueryResult().
func (t *SimpleChaincode) queryMarbles(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	//   0
	// "queryString"
	if len(args) < 1 {
		return shim.Error("Incorrect number of arguments. Expecting 1")
	}
	queryString := args[0]
	queryResults, err := getQueryResultForQueryString(stub, queryString)
	if err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(queryResults)
}
• {"selector":{"docType":"marble","owner":"tom"}
This is an example of an ad hoc selector string which finds all documents of type marble where the
owner attribute has a value of tom.
• "use_index":["_design/indexOwnerDoc", "indexOwner"]
Specifies both the design doc name indexOwnerDoc and index name indexOwner. In this ex-
ample the selector query explicitly includes the index name, specified by using the use_index
keyword. Recalling the index definition above Create an index, it contains a design doc,
"ddoc":"indexOwnerDoc". With CouchDB, if you plan to explicitly include the index name on
the query, then the index definition must include the ddoc value, so it can be referenced with the
use_index keyword.
The query runs successfully and the index is leveraged with the following results:
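Query Result: [{"Key":"marble1", "Record":{"color":"blue","docType":"marble","name":"marble1","owner":"tom","size":35}}]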
When large result sets are returned by CouchDB queries, a set of APIs is available which can be called by chaincode to
paginate the list of results. Pagination provides a mechanism to partition the result set by specifying a pagesize and
a start point – a bookmark which indicates where to begin the result set. The client application iteratively invokes
the chaincode that executes the query until no more results are returned. For more information refer to this topic on
pagination with CouchDB.
We will use the Marbles sample function queryMarblesWithPagination to demonstrate how pagination can
be implemented in chaincode and the client application.
• queryMarblesWithPagination –
Example of an ad hoc rich query with pagination. This is a query where a (selector) string can be
passed into the function similar to the above example. In this case, a pageSize is also included
with the query as well as a bookmark.
In order to demonstrate pagination, more data is required. This example assumes that you have already added marble1
from above. Run the following commands in the peer container to create four more marbles owned by “tom”, to create
a total of five marbles owned by “tom”:
Try it yourself
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble2","yellow","35","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble3","green","20","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble4","purple","20","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble5","blue","40","tom"]}'
In addition to the arguments for the query in the previous example, queryMarblesWithPagination adds pagesize
and bookmark. PageSize specifies the number of records to return per query. The bookmark is an “anchor”
telling CouchDB where to begin the page. (Each page of results returns a unique bookmark.)
• queryMarblesWithPagination
Name of the function in the Marbles chaincode. Notice the shim.ChaincodeStubInterface
is used to access and modify the ledger. The
getQueryResultForQueryStringWithPagination() passes the queryString along
with the pagesize and bookmark to the shim API GetQueryResultWithPagination().
func (t *SimpleChaincode) queryMarblesWithPagination(stub shim.ChaincodeStubInterface,
˓→ args []string) pb.Response {
// 0
// "queryString"
if len(args) < 3 {
return shim.Error("Incorrect number of arguments. Expecting 3")
}
queryString := args[0]
//return type of ParseInt is int64
pageSize, err := strconv.ParseInt(args[1], 10, 32)
if err != nil {
return shim.Error(err.Error())
}
bookmark := args[2]
	queryResults, err := getQueryResultForQueryStringWithPagination(stub, queryString, int32(pageSize), bookmark)
	if err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(queryResults)
}
The following example is a peer command which calls queryMarblesWithPagination with a pageSize of 3 and no
bookmark specified.
Tip: When no bookmark is specified, the query starts with the “first” page of records.
Try it yourself
// Rich Query with index name explicitly specified and a page size of 3:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesWithPagination", "{\"selector\":{\"docType\":\"marble\",\"owner\":\"tom\"}, \"use_index\":[\"_design/indexOwnerDoc\", \"indexOwner\"]}","3",""]}'
The following response is received (carriage returns added for clarity); three of the five marbles are returned because
the pagesize was set to 3:

[{"Key":"marble1", "Record":{"color":"blue","docType":"marble","name":"marble1","owner":"tom","size":35}},
{"Key":"marble2", "Record":{"color":"yellow","docType":"marble","name":"marble2","owner":"tom","size":35}},
{"Key":"marble3", "Record":{"color":"green","docType":"marble","name":"marble3","owner":"tom","size":20}}]
[{"ResponseMetadata":{"RecordsCount":"3",
"Bookmark":"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkGoOkOWDSOSANIFk2iCyIyVySn5uVBQAGEhRz"}}]
Note: Bookmarks are uniquely generated by CouchDB for each query and represent a placeholder in the result set.
Pass the returned bookmark on the subsequent iteration of the query to retrieve the next set of results.
The following is a peer command to call queryMarblesWithPagination with a pageSize of 3. Notice this time, the
query includes the bookmark returned from the previous query.
Try it yourself
˓→"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkGoOkOWDSOSANIFk2iCyIyVySn5uVBQAGEhRz
˓→"]}'
The following response is received (carriage returns added for clarity). The last two records are retrieved:
[{"Key":"marble4", "Record":{"color":"purple","docType":"marble","name":"marble4",
˓→"owner":"tom","size":20}},
{"Key":"marble5", "Record":{"color":"blue","docType":"marble","name":"marble5","owner
˓→":"tom","size":40}}]
[{"ResponseMetadata":{"RecordsCount":"2",
"Bookmark":
˓→"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1
˓→"}}]
The final command is a peer command to call queryMarblesWithPagination with a pageSize of 3 and with the book-
mark from the previous query.
Try it yourself
˓→"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1
˓→"]}'
The following response is received (carriage returns added for clarity). No records are returned, indicating that all
pages have been retrieved:
[]
[{"ResponseMetadata":{"RecordsCount":"0",
"Bookmark":
˓→"g1AAAABLeJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYqz5yYWJeWkmoKkOWDSOSANIFk2iCyIyVySn5uVBQAGYhR1
˓→"}}]
For an example of how a client application can iterate over the result sets using pagination, search for the
getQueryResultForQueryStringWithPagination function in the Marbles sample.
It may be necessary to update an index over time. The same index may exist in subsequent versions of the chaincode
that gets installed. In order for an index to be updated, the original index definition must have included the design
document ddoc attribute and an index name. To update an index definition, use the same index name but alter the
index definition. Simply edit the index JSON file and add or remove fields from the index. Fabric only supports the
index type JSON; changing the index type is not supported. The updated index definition gets redeployed to the peer’s
state database when the chaincode is installed and instantiated. Changes to the index name or ddoc attributes will
result in a new index being created and the original index remains unchanged in CouchDB until it is removed.
Note: If the state database has a significant volume of data, it will take some time for the index to be re-built, during
which time chaincode invokes that issue queries may fail or time out.
If you have access to your peer’s CouchDB state database in a development environment, you can iteratively test
various indexes in support of your chaincode queries. Any changes to chaincode though would require redeployment.
Use the CouchDB Fauxton interface or a command line curl utility to create and update indexes.
Note: The Fauxton interface is a web UI for the creation, update, and deployment of indexes to CouchDB. If you
want to try out this interface, there is an example of the format of the Fauxton version of the index in Marbles sample.
If you have deployed the BYFN network with CouchDB, the Fauxton interface can be loaded by opening a browser
and navigating to http://localhost:5984/_utils.
Alternatively, if you prefer not to use the Fauxton UI, the following is an example of a curl command which can be used
to create the index on the database mychannel_marbles:
// Index for docType, owner.
// Example curl command line to define index in the CouchDB channel_chaincode database
curl -i -X POST -H "Content-Type: application/json" -d "{\"index\":{\"fields\":[\"docType\",\"owner\"]},\"name\":\"indexOwner\",\"ddoc\":\"indexOwnerDoc\",\"type\":\"json\"}" http://hostname:port/mychannel_marbles/_index
Note: If you are using BYFN configured with CouchDB, replace hostname:port with localhost:5984.
Index deletion is not managed by Fabric tooling. If you need to delete an index, manually issue a curl command
against the database or delete it using the Fauxton interface.
The format of the curl command to delete an index would be:

curl -X DELETE http://localhost:5984/{database_name}/_index/{design_doc}/json/{index_name} -H "accept: */*" -H "Host: localhost:5984"

To delete the index used in this tutorial, the curl command would be:

curl -X DELETE http://localhost:5984/mychannel_marbles/_index/indexOwnerDoc/json/indexOwner -H "accept: */*" -H "Host: localhost:5984"
7.12 Videos
Operations Guides
At a high level, upgrading a Fabric network from v1.3 to v1.4 can be performed by following these steps:
• Upgrade the binaries for the ordering service, the Fabric CA, and the peers. These upgrades may be done in
parallel.
• Upgrade client SDKs.
• (Optional) Upgrade the Kafka cluster.
To help understand this process, we’ve created the Upgrading Your Network Components tutorial that will take you
through most of the major upgrade steps, including upgrading peers and orderers. We’ve included both a script as well
as the individual steps to achieve these upgrades.
Because our tutorial leverages the Building Your First Network (BYFN) sample, it has certain limitations (it does not
use Fabric CA, for example). Therefore we have included a section at the end of the tutorial that will show how to
upgrade your CA, Kafka clusters, CouchDB, Zookeeper, vendored chaincode shims, and Node SDK clients.
Because there are no new Capability Requirements in v1.4, the upgrade process does not require any channel config-
uration transactions.
Channel configurations contain all of the information relevant to the administration of a channel. Most importantly, the
channel configuration specifies which organizations are members of the channel, but it also includes other channel-wide
configuration information such as channel access policies and block batch sizes.
This configuration is stored on the ledger in a block, and is therefore known as a configuration (config) block. Con-
figuration blocks contain a single configuration. The first of these blocks is known as the “genesis block” and contains
the initial configuration required to bootstrap a channel. Each time the configuration of a channel changes it is done
through a new configuration block, with the latest configuration block representing the current channel configuration.
Orderers and peers keep the current channel configuration in memory to facilitate all channel operations such as cutting
a new block and validating block transactions.
Because configurations are stored in blocks, updating a config happens through a process called a “configuration
transaction” (even though the process is a little different from a normal transaction). Updating a config is a process of
pulling the config, translating it into a format that humans can read, modifying it, and then submitting it for approval.
For a more in-depth look at the process for pulling a config and translating it into JSON, check out Adding an Org
to a Channel. In this doc, we’ll be focusing on the different ways you can edit a config and the process for getting it
signed.
Channels are highly configurable, but not infinitely so. Different configuration elements have different modification
policies (which specify the group of identities required to sign the config update).
To see the scope of what’s possible to change, it’s important to look at a config in JSON format. The Adding an Org
to a Channel tutorial generates one, so if you’ve gone through that doc you can simply refer to it. For those who have
not, we’ll provide one here (for ease of readability, it might be helpful to put this config into a viewer that supports
JSON folding, like atom or Visual Studio).
Here is a sample config (abridged, with long certificate strings truncated):
{
"channel_group": {
"groups": {
"Application": {
"groups": {
"Org1MSP": {
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "Org1MSP",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
"version": 0
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRSWlyVmg3NVcwWmh0UjEzdmltdmliakFLQm"
],
"crypto_config": {
"identity_identifier_hash_function": "SHA256",
"signature_hash_family": "SHA2"
},
"name": "Org1MSP",
"root_certs": [
˓→ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRekNDQWVxZ0F3SUJBZ0lSQU03ZVdTaVM4V3VVM2haMU9tR255eXd3Q2
˓→ "
],
"tls_root_certs": [
˓→ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQUtsNEFQWmV6dWt0Nk8wYjRyYjY5Y0F3Q2
˓→ "
]
},
"type": 0
},
"version": "0"
}
},
"version": "1"
},
"Org2MSP": {
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHVENDQWNDZ0F3SUJBZ0lSQU5Pb1lIbk9seU94dTJxZFBteStyV293Q2"
],
"crypto_config": {
"identity_identifier_hash_function": "SHA256",
"signature_hash_family": "SHA2"
},
"name": "Org2MSP",
"root_certs": [
˓→ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTakNDQWZDZ0F3SUJBZ0lSQU9JNmRWUWMraHBZdkdMSlFQM1YwQU13Q2
˓→ "
]
},
"type": 0
},
"version": "0"
}
},
"version": "1"
},
"Org3MSP": {
"groups": {},
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "Org3MSP",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
"version": 0
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "Org3MSP",
"role": "MEMBER"
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNHRENDQWIrZ0F3SUJBZ0lRQUlSNWN4U0hpVm1kSm9uY3FJVUxXekFLQm"
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNRakNDQWVtZ0F3SUJBZ0lRUkN1U2Y0RVJNaDdHQW1ydTFIQ2FZREFLQm"
],
"tls_root_certs": [
˓→ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWZDZ0F3SUJBZ0lSQU9xc2JQQzFOVHJzclEvUUNpalh6K0F3Q2
˓→ "
]
},
"type": 0
},
"version": "0"
}
},
"version": "0"
}
},
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Readers"
}
},
"version": "0"
},
"Writers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNDakNDQWJDZ0F3SUJBZ0lRSFNTTnIyMWRLTTB6THZ0dEdoQnpMVEFLQm"
],
"crypto_config": {
"identity_identifier_hash_function": "SHA256",
"signature_hash_family": "SHA2"
},
"name": "OrdererMSP",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNMakNDQWRXZ0F3SUJBZ0lRY2cxUVZkVmU2Skd6YVU1cmxjcW4vakFLQm"
],
"tls_root_certs": [
˓→ "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNORENDQWR1Z0F3SUJBZ0lRYWJ5SUl6cldtUFNzSjJacisvRVpXVEFLQm
˓→ "
]
},
"type": 0
},
"version": "0"
}
},
"version": "0"
}
},
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
},
"version": "0"
},
"BlockValidation": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Readers"
}
},
"version": "0"
},
"Writers": {
"mod_policy": "Admins",
"policy": {
A config might look intimidating in this form, but once you study it you’ll see that it has a logical structure.
Beyond the definitions of the policies – defining who can do certain things at the channel level, and who has the
permission to change who can change the config – channels also have other kinds of features that can be modified
using a config update. Adding an Org to a Channel takes you through one of the most important – adding an org to a
channel. Some other things that are possible to change with a config update include:
• Batch Size. These parameters dictate the number and size of transactions in a block. No block will appear larger
than absolute_max_bytes or with more than max_message_count transactions inside the block.
If it is possible to construct a block under preferred_max_bytes, then a block will be cut prematurely,
and transactions larger than this size will appear in their own block.
{
"absolute_max_bytes": 102760448,
"max_message_count": 10,
"preferred_max_bytes": 524288
}
• Batch Timeout. The amount of time to wait after the first transaction arrives for additional transactions before
cutting a block. Decreasing this value will improve latency, but decreasing it too much may decrease throughput
by not allowing the block to fill to its maximum capacity.
{ "timeout": "2s" }
• Channel Restrictions. The total number of channels the orderer is willing to allocate may be spec-
ified as max_count. This is primarily useful in pre-production environments with weak consortium
ChannelCreation policies.
{
"max_count":1000
}
• Channel Creation Policy. Defines the policy value which will be set as the mod_policy for the Application
group of new channels for the consortium it is defined in. The signature set attached to the channel creation
request will be checked against the instantiation of this policy in the new channel to ensure that the channel
creation is authorized. Note that this config value is only set in the orderer system channel.
{
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Admins"
}
}
• Kafka brokers. When ConsensusType is set to kafka, the brokers list enumerates some subset (or
preferably all) of the Kafka brokers for the orderer to initially connect to at startup. Note that it is not possible
to change your consensus type after it has been established (during the bootstrapping of the genesis block).
{
"brokers": [
"kafka0:9092",
"kafka1:9092",
"kafka2:9092",
"kafka3:9092"
]
}
• Anchor Peers Definition. Defines the location of the anchor peers for each Org.
{
"host": "peer0.org2.example.com",
"port": 7051
}
• Hashing Structure. The block data is an array of byte arrays. The hash of the block data is computed as
a Merkle tree. This value specifies the width of that Merkle tree. For the time being, this value is fixed to
4294967295 which corresponds to a simple flat hash of the concatenation of the block data bytes.
{ "width": 4294967295 }
• Hashing Algorithm. The algorithm used for computing the hash values encoded into the blocks of the
blockchain. In particular, this affects the data hash, and the previous block hash fields of the block. Note,
this field currently only has one valid value (SHA256) and should not be changed.
{ "name": "SHA256" }
• Block Validation. This policy specifies the signature requirements for a block to be considered valid. By
default, it requires a signature from some member of the ordering org.
{
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
}
• Orderer Address. A list of addresses where clients may invoke the orderer Broadcast and Deliver func-
tions. The peer randomly chooses among these addresses and fails over between them for retrieving blocks.
{
"addresses": [
"orderer.example.com:7050"
]
}
Just as we add an Org by adding their artifacts and MSP information, you can remove them by reversing the process.
Note that once the consensus type has been defined and the network has been bootstrapped, it is not possible to change
it through a configuration update.
There is another important channel configuration (especially for v1.1) known as Capability Requirements. It has its
own doc that can be found here.
Let’s say you want to edit the block batch size for the channel (because this is a single numeric field, it’s one of the easiest changes to make). First, to make referencing the JSON path easy, we define it as an environment variable.
To establish the path, look through your config, find the field you’re looking for, and backtrack from it. If you find batch size, for example, you’ll see that it’s a value of the Orderer group. Orderer can be found under groups, which is under channel_group. The batch size value has a parameter max_message_count under value.
This makes the path:
export MAXBATCHSIZEPATH=".channel_group.groups.Orderer.values.BatchSize.value.max_message_count"
jq "$MAXBATCHSIZEPATH" config.json
Once you’ve modified the JSON, it’s ready to be converted and submitted. The scripts and steps in Adding an Org to
a Channel will take you through the process for converting the JSON, so let’s look at the process of submitting it.
Once you’ve successfully generated the protobuf file, it’s time to get it signed. To do this, you need to know the
relevant policy for whatever it is you’re trying to change.
By default, editing the configuration of:
• A particular org (for example, changing anchor peers) requires only the admin signature of that org.
• The application (like who the member orgs are) requires a majority of the application organizations’ admins to
sign.
• The orderer requires a majority of the ordering organizations’ admins (of which there are by default only 1).
• The top level channel group requires both the agreement of a majority of application organization admins
and orderer organization admins.
If you have made changes to the default policies in the channel, you’ll need to compute your signature requirements
accordingly.
Note: you may be able to script the signature collection, depending on your application. In general, you may always collect more signatures than are required.
The actual process of getting these signatures will depend on how you’ve set up your system, but there are two main
implementations. Currently, the Fabric command line defaults to a “pass it along” system. That is, the Admin of the
Org proposing a config update sends the update to someone else (another Admin, typically) who needs to sign it. This
Admin signs it (or doesn’t) and passes it along to the next Admin, and so on, until there are enough signatures for the
config to be submitted.
This has the virtue of simplicity – when there are enough signatures, the last Admin can simply submit the config
transaction (in Fabric, the peer channel update command includes a signature by default). However, this
process will only be practical in smaller channels, since the “pass it along” method can be time consuming.
The other option is to submit the update to every Admin on a channel and wait for enough signatures to come back.
These signatures can then be stitched together and submitted. This makes life a bit more difficult for the Admin who created the config update (forcing them to deal with a file per signer), but it is the recommended workflow for users who are developing Fabric management applications.
Once the config has been added to the ledger, it is a best practice to pull it and convert it to JSON to check that everything was added correctly. This also serves as a useful copy of the latest config.
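A sketch of that check, assuming an orderer at orderer.example.com:7050 and the configtxlator and jq tools used elsewhere in this process:
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json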
This document provides details on MSP setup and related best practices.
Membership Service Provider (MSP) is a component that aims to offer an abstraction of a membership operation architecture.
In particular, MSP abstracts away all cryptographic mechanisms and protocols behind issuing and validating certificates, and user authentication. An MSP may define its own notion of identity, and the rules by which those identities are governed (identity validation) and authenticated (signature generation and verification).
A Hyperledger Fabric blockchain network can be governed by one or more MSPs. This provides modularity of
membership operations, and interoperability across different membership standards and architectures.
In the rest of this document we elaborate on the setup of the MSP implementation supported by Hyperledger Fabric,
and discuss best practices concerning its use.
To set up an instance of the MSP, its configuration needs to be specified locally at each peer and orderer (to enable peer and orderer signing), and on the channels to enable peer, orderer, and client identity validation, and the respective signature verification (authentication) by and for all channel members.
Firstly, for each MSP a name needs to be specified in order to reference that MSP in the network (e.g. msp1, org2, and org3.divA). This is the name under which membership rules of an MSP representing a consortium, organization or organization division are to be referenced in a channel. This is also referred to as the MSP Identifier or MSP ID. MSP Identifiers are required to be unique per MSP instance. For example, if two MSP instances with the same identifier are detected at system channel genesis, orderer setup will fail.
In the case of the default MSP implementation, a set of parameters need to be specified to allow for identity (certificate) validation and signature verification. These parameters are derived from RFC 5280, and include:
• A list of self-signed (X.509) certificates to constitute the root of trust
• A list of X.509 certificates to represent intermediate CAs this provider considers for certificate validation; these
certificates ought to be certified by exactly one of the certificates in the root of trust; intermediate CAs are
optional parameters
• A list of X.509 certificates with a verifiable certificate path to exactly one of the certificates of the root of trust
to represent the administrators of this MSP; owners of these certificates are authorized to request changes to this
MSP configuration (e.g. root CAs, intermediate CAs)
• A list of Organizational Units that valid members of this MSP should include in their X.509 certificate; this is
an optional configuration parameter, used when, e.g., multiple organizations leverage the same root of trust, and
intermediate CAs, and have reserved an OU field for their members
• A list of certificate revocation lists (CRLs) each corresponding to exactly one of the listed (intermediate or root)
MSP Certificate Authorities; this is an optional parameter
• A list of self-signed (X.509) certificates to constitute the TLS root of trust for TLS certificates.
• A list of X.509 certificates to represent intermediate TLS CAs this provider considers; these certificates ought to
be certified by exactly one of the certificates in the TLS root of trust; intermediate CAs are optional parameters.
Valid identities for this MSP instance are required to satisfy the following conditions:
• They are in the form of X.509 certificates with a verifiable certificate path to exactly one of the root of trust
certificates;
• They are not included in any CRL;
• And they list one or more of the Organizational Units of the MSP configuration in the OU field of their X.509
certificate structure.
For more information on the validity of identities in the current MSP implementation, we refer the reader to msp-
identity-validity-rules.
In addition to verification related parameters, for the MSP to enable the node on which it is instantiated to sign or
authenticate, one needs to specify:
• The signing key used for signing by the node (currently only ECDSA keys are supported), and
• The node’s X.509 certificate, that is a valid identity under the verification parameters of this MSP.
It is important to note that MSP identities never expire; they can only be revoked by adding them to the appropriate
CRLs. Additionally, there is currently no support for enforcing revocation of TLS certificates.
To generate X.509 certificates to feed its MSP configuration, the application can use Openssl. We emphasize that in
Hyperledger Fabric there is no support for certificates including RSA keys.
Alternatively one can use cryptogen tool, whose operation is explained in Getting Started.
Hyperledger Fabric CA can also be used to generate the keys and certificates needed to configure an MSP.
To set up a local MSP (for either a peer or an orderer), the administrator should create a folder (e.g. $MY_PATH/mspconfig) with the following contents:
1. a folder admincerts to include PEM files each corresponding to an administrator certificate
2. a folder cacerts to include PEM files each corresponding to a root CA’s certificate
3. (optional) a folder intermediatecerts to include PEM files each corresponding to an intermediate CA’s
certificate
4. (optional) a file config.yaml to configure the supported Organizational Units and identity classifications
(see respective sections below).
5. (optional) a folder crls to include the considered CRLs
6. a folder keystore to include a PEM file with the node’s signing key; we emphasize that currently RSA keys are not supported
7. a folder signcerts to include a PEM file with the node’s X.509 certificate
8. (optional) a folder tlscacerts to include PEM files each corresponding to a TLS root CA’s certificate
9. (optional) a folder tlsintermediatecerts to include PEM files each corresponding to an intermediate
TLS CA’s certificate
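For illustration, the resulting layout could look as follows (the folder name mspconfig and the optional entries are per the list above):
$MY_PATH/mspconfig/
├── admincerts/
├── cacerts/
├── intermediatecerts/ (optional)
├── config.yaml (optional)
├── crls/ (optional)
├── keystore/
├── signcerts/
├── tlscacerts/ (optional)
└── tlsintermediatecerts/ (optional)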
In the configuration file of the node (core.yaml file for the peer, and orderer.yaml for the orderer), one needs to specify
the path to the mspconfig folder, and the MSP Identifier of the node’s MSP. The path to the mspconfig folder is
expected to be relative to FABRIC_CFG_PATH and is provided as the value of parameter mspConfigPath for
the peer, and LocalMSPDir for the orderer. The identifier of the node’s MSP is provided as a value of parameter
localMspId for the peer and LocalMSPID for the orderer. These variables can be overridden via the environment
using the CORE prefix for peer (e.g. CORE_PEER_LOCALMSPID) and the ORDERER prefix for the orderer (e.g.
ORDERER_GENERAL_LOCALMSPID). Notice that for the orderer setup, one needs to generate the genesis block of the system channel and provide it to the orderer. The MSP configuration needs of this block are detailed in the next section.
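As a sketch, the relevant peer settings in core.yaml might look like this (the MSP ID Org1MSP is illustrative):
peer:
  mspConfigPath: msp
  localMspId: Org1MSP
or, equivalently, via the environment:
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/msp
export CORE_PEER_LOCALMSPID="Org1MSP"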
Reconfiguration of a “local” MSP is only possible manually, and requires that the peer or orderer process be restarted. In subsequent releases we aim to offer online/dynamic reconfiguration (i.e. without requiring the node to be stopped, by means of a node-managed system chaincode).
In order to configure the list of Organizational Units that valid members of this MSP should include in their X.509
certificate, the config.yaml file needs to specify the organizational unit identifiers. Here is an example:
OrganizationalUnitIdentifiers:
  - Certificate: "cacerts/cacert1.pem"
    OrganizationalUnitIdentifier: "commercial"
  - Certificate: "cacerts/cacert2.pem"
    OrganizationalUnitIdentifier: "administrators"
The above example declares two organizational unit identifiers: commercial and administrators. An MSP identity
is valid if it carries at least one of these organizational unit identifiers. The Certificate field refers to the CA
or intermediate CA certificate path under which identities, having that specific OU, should be validated. The path is
relative to the MSP root folder and cannot be empty.
The default MSP implementation allows identities to be further classified into clients and peers, based on the OUs of their x509 certificates. An identity should be classified as a client if it submits transactions, queries peers, etc. An identity should be classified as a peer if it endorses or commits transactions. In order to define clients and peers of a given MSP, the config.yaml file needs to be set appropriately. Here is an example:
NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "client"
  PeerOUIdentifier:
    Certificate: "cacerts/cacert.pem"
    OrganizationalUnitIdentifier: "peer"
As shown above, NodeOUs.Enable is set to true; this enables the identity classification. Client (peer) identifiers are then defined by setting the following properties for the NodeOUs.ClientOUIdentifier (NodeOUs.PeerOUIdentifier) key:
1. OrganizationalUnitIdentifier: Set this to the value that matches the OU that the x509 certificate of
a client (peer) should contain.
2. Certificate: Set this to the CA or intermediate CA under which client (peer) identities should be validated.
The field is relative to the MSP root folder. It can be empty, meaning that the identity’s x509 certificate can be
validated under any CA defined in the MSP configuration.
When the classification is enabled, MSP administrators need to be clients of that MSP, meaning that their x509 certificates need to carry the OU that identifies the clients. Notice also that an identity can be either a client or a peer; the two classifications are mutually exclusive. If an identity is neither a client nor a peer, the validation will fail.
Finally, notice that for upgraded environments the 1.1 channel capability needs to be enabled before identity classification can be used.
At the genesis of the system, verification parameters of all the MSPs that appear in the network need to be specified,
and included in the system channel’s genesis block. Recall that MSP verification parameters consist of the MSP
identifier, the root of trust certificates, intermediate CA and admin certificates, as well as OU specifications and CRLs.
The system genesis block is provided to the orderers at their setup phase, and allows them to authenticate channel creation requests. Orderers would reject a system genesis block that includes two MSPs with the same identifier, and consequently the bootstrapping of the network would fail.
For application channels, the verification components of only the MSPs that govern a channel need to reside in the
channel’s genesis block. We emphasize that it is the responsibility of the application to ensure that correct MSP
configuration information is included in the genesis blocks (or the most recent configuration block) of a channel prior
to instructing one or more of their peers to join the channel.
When bootstrapping a channel with the help of the configtxgen tool, one can configure the channel MSPs by including the verification parameters of the MSP in the mspconfig folder, and setting that path in the relevant section in configtx.yaml.
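For example, an organization entry in configtx.yaml references its MSP folder via MSPDir (names and paths here are illustrative):
Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp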
Reconfiguration of an MSP on the channel, including announcements of the certificate revocation lists associated to
the CAs of that MSP is achieved through the creation of a config_update object by the owner of one of the
administrator certificates of the MSP. The client application managed by the admin would then announce this update
to the channels in which this MSP appears.
In this section we elaborate on best practices for MSP configuration in commonly met scenarios.
1) Mapping between organizations/corporations and MSPs
We recommend that there is a one-to-one mapping between organizations and MSPs. If a different type of mapping is chosen, the following needs to be considered:
• One organization employing various MSPs. This corresponds to the case of an organization including a
variety of divisions each represented by its MSP, either for management independence reasons, or for privacy
reasons. In this case a peer can only be owned by a single MSP, and will not recognize peers with identities from
other MSPs as peers of the same organization. The implication of this is that peers may share through gossip
organization-scoped data with a set of peers that are members of the same subdivision, and NOT with the full
set of providers constituting the actual organization.
• Multiple organizations using a single MSP. This corresponds to a case of a consortium of organizations that are
governed by similar membership architecture. One needs to know here that peers would propagate organization-
scoped messages to the peers that have an identity under the same MSP regardless of whether they belong to
the same actual organization. This is a limitation of the granularity of MSP definition, and/or of the peer’s
configuration.
2) One organization has different divisions (say organizational units), to which it wants to grant access to
different channels.
Two ways to handle this:
• Define one MSP to accommodate membership for all organization’s members. Configuration of that MSP
would consist of a list of root CAs, intermediate CAs and admin certificates; and membership identities would
include the organizational unit (OU) a member belongs to. Policies can then be defined to capture members of
a specific OU, and these policies may constitute the read/write policies of a channel or endorsement policies of
a chaincode. A limitation of this approach is that gossip peers would consider peers with membership iden-
tities under their local MSP as members of the same organization, and would consequently gossip with them
organization-scoped data (e.g. their status).
• Define one MSP to represent each division. This would involve specifying, for each division, a set of certificates for root CAs, intermediate CAs, and admin certs, such that there is no overlapping certification path across MSPs. This would mean that, for example, a different intermediate CA per subdivision is employed. Here the disadvantage is the management of more than one MSP instead of one, but this circumvents the issue present in the previous approach. One could also define one MSP for each division by leveraging an OU extension of the MSP configuration.
3) Separating clients from peers of the same organization.
In many cases it is required that the “type” of an identity is retrievable from the identity itself (e.g. it may be needed that endorsements are guaranteed to have been derived by peers, and not clients or nodes acting solely as orderers).
There is limited support for such requirements.
One way to allow for this separation is to create a separate intermediate CA for each node type - one for clients and
one for peers/orderers; and configure two different MSPs - one for clients and one for peers/orderers. Channels this
organization should be accessing would need to include both MSPs, while endorsement policies will leverage only the
MSP that refers to the peers. This would ultimately result in the organization being mapped to two MSP instances,
and would have certain consequences on the way peers and clients interact.
Gossip would not be drastically impacted as all peers of the same organization would still belong to one MSP. Peers
can restrict the execution of certain system chaincodes to local MSP based policies. For example, peers would only execute a “joinChannel” request if the request is signed by the admin of their local MSP, who can only be a client (an end user should be sitting at the origin of that request). We can get around this inconsistency if we accept that the only clients to be members of a peer/orderer MSP would be the administrators of that MSP.
Another point to be considered with this approach is that peers authorize event registration requests based on mem-
bership of request originator within their local MSP. Clearly, since the originator of the request is a client, the request
originator is always deemed to belong to a different MSP than the requested peer and the peer would reject the request.
4) Admin and CA certificates.
It is important to set MSP admin certificates to be different from any of the certificates considered by the MSP for the root of trust, or intermediate CAs. This is a common (security) practice to separate the duties of management of membership components from the issuing of new certificates, and/or the validation of existing ones.
5) Blacklisting an intermediate CA.
As mentioned in previous sections, reconfiguration of an MSP is achieved by reconfiguration mechanisms (manual
reconfiguration for the local MSP instances, and via properly constructed config_update messages for MSP
instances of a channel). Clearly, there are two ways to ensure an intermediate CA considered in an MSP is no longer
considered for that MSP’s identity validation:
1. Reconfigure the MSP to no longer include the certificate of that intermediate CA in the list of trusted intermediate
CA certs. For the locally configured MSP, this would mean that the certificate of this CA is removed from the
intermediatecerts folder.
2. Reconfigure the MSP to include a CRL produced by the root of trust which denounces the mentioned interme-
diate CA’s certificate.
In the current MSP implementation we only support method (1), as it is simpler and does not require blacklisting the intermediate CA that is no longer considered.
6) CAs and TLS CAs
MSP identities’ root CAs and MSP TLS certificates’ root CAs (and the related intermediate CAs) need to be declared in different folders. This is to avoid confusion between different classes of certificates. It is not forbidden to reuse the same CAs for both MSP identities and TLS certificates, but best practice is to avoid this in production.
Shared configuration for a Hyperledger Fabric blockchain network is stored in a collection of configuration transactions, one per channel. Each configuration transaction is usually referred to by the shorter name configtx.
Channel configuration has the following important properties:
1. Versioned: All elements of the configuration have an associated version which is advanced with every modifi-
cation. Further, every committed configuration receives a sequence number.
2. Permissioned: Each element of the configuration has an associated policy which governs whether or not modi-
fication to that element is permitted. Anyone with a copy of the previous configtx (and no additional info) may
verify the validity of a new config based on these policies.
3. Hierarchical: A root configuration group contains sub-groups, and each group of the hierarchy has associated
values and policies. These policies can take advantage of the hierarchy to derive policies at one level from
policies of lower levels.
Configuration is stored as a transaction of type HeaderType_CONFIG in a block with no other transactions. These
blocks are referred to as Configuration Blocks, the first of which is referred to as the Genesis Block.
The proto structures for configuration are stored in fabric/protos/common/configtx.proto. The Enve-
lope of type HeaderType_CONFIG encodes a ConfigEnvelope message as the Payload data field. The
proto for ConfigEnvelope is defined as follows:
message ConfigEnvelope {
Config config = 1;
Envelope last_update = 2;
}
The last_update field is defined below in the Updates to configuration section, but is only necessary when
validating the configuration, not reading it. Instead, the currently committed configuration is stored in the config
field, containing a Config message.
message Config {
uint64 sequence = 1;
ConfigGroup channel_group = 2;
}
The sequence number is incremented by one for each committed configuration. The channel_group field is the
root group which contains the configuration. The ConfigGroup structure is recursively defined, and builds a tree of
groups, each of which contains values and policies. It is defined as follows:
message ConfigGroup {
uint64 version = 1;
map<string,ConfigGroup> groups = 2;
map<string,ConfigValue> values = 3;
map<string,ConfigPolicy> policies = 4;
string mod_policy = 5;
}
Because ConfigGroup is a recursive structure, the groups form a hierarchy. The following example is expressed in golang notation for clarity.
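// Assume the following groups are defined
var root, child1, child2, grandChild1, grandChild2, grandChild3 *ConfigGroup

// Set the following values
root.Groups["child1"] = child1
root.Groups["child2"] = child2
child1.Groups["grandChild1"] = grandChild1
child2.Groups["grandChild2"] = grandChild2
child2.Groups["grandChild3"] = grandChild3

// The resulting config structure of groups looks like:
// root:
//     child1:
//         grandChild1
//     child2:
//         grandChild2
//         grandChild3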
Each group defines a level in the config hierarchy, and each group has an associated set of values (indexed by string
key) and policies (also indexed by string key).
Values are defined by:
message ConfigValue {
uint64 version = 1;
bytes value = 2;
string mod_policy = 3;
}
Policies are defined by:
message ConfigPolicy {
uint64 version = 1;
Policy policy = 2;
string mod_policy = 3;
}
Note that Values, Policies, and Groups all have a version and a mod_policy. The version of an ele-
ment is incremented each time that element is modified. The mod_policy is used to govern the required sig-
natures to modify that element. For Groups, modification is adding or removing elements to the Values, Poli-
cies, or Groups maps (or changing the mod_policy). For Values and Policies, modification is changing the
Value and Policy fields respectively (or changing the mod_policy). Each element’s mod_policy is eval-
uated in the context of the current level of the config. Consider the following example mod policies defined
at Channel.Groups["Application"] (Here, we use the golang map reference syntax, so Channel.
Groups["Application"].Policies["policy1"] refers to the base Channel group’s Application
group’s Policies map’s policy1 policy.)
• policy1 maps to Channel.Groups["Application"].Policies["policy1"]
• Org1/policy2 maps to Channel.Groups["Application"].Groups["Org1"].
Policies["policy2"]
• /Channel/policy3 maps to Channel.Policies["policy3"]
Note that if a mod_policy references a policy which does not exist, the item cannot be modified.
A configuration update is submitted as an Envelope message of type HeaderType_CONFIG_UPDATE, whose Payload data is a marshaled ConfigUpdateEnvelope. The ConfigUpdateEnvelope message is defined as follows:
message ConfigUpdateEnvelope {
bytes config_update = 1;
repeated ConfigSignature signatures = 2;
}
The signatures field contains the set of signatures which authorizes the config update. Its message definition is:
message ConfigSignature {
bytes signature_header = 1;
bytes signature = 2;
}
The signature_header is as defined for standard transactions, while the signature is over the concatenation of
the signature_header bytes and the config_update bytes from the ConfigUpdateEnvelope message.
The ConfigUpdateEnvelope config_update bytes are a marshaled ConfigUpdate message which is de-
fined as follows:
message ConfigUpdate {
string channel_id = 1;
ConfigGroup read_set = 2;
ConfigGroup write_set = 3;
}
The channel_id is the channel ID the update is bound for; this is necessary to scope the signatures which support this reconfiguration.
The read_set specifies a subset of the existing configuration, specified sparsely: only the version field is set, and no other fields need be populated. The particular ConfigValue value or ConfigPolicy policy fields should never be set in the read_set. The ConfigGroup may have a subset of its map fields populated, so as to reference an element deeper in the config tree. For instance, to include the Application group in the read_set, its parent (the Channel group) must also be included in the read set, but the Channel group does not need to populate all of its keys, such as the Orderer group key, or any of the values or policies keys.
The write_set specifies the pieces of configuration which are modified. Because of the hierarchical nature of the
configuration, a write to an element deep in the hierarchy must contain the higher level elements in its write_set
as well. However, for any element in the write_set which is also specified in the read_set at the same version,
the element should be specified sparsely, just as in the read_set.
For example, given the configuration:

Channel: (version 0)
    Orderer (version 0)
    Application (version 3)
        Org1 (version 2)

To submit a configuration update which modifies Org1, the read_set would be:

Channel: (version 0)
    Application: (version 3)

And the write_set would be:

Channel: (version 0)
    Application: (version 3)
        Org1 (version 3)
When the CONFIG_UPDATE is received, the orderer computes the resulting CONFIG by doing the following:
1. Verifies the channel_id and read_set. All elements in the read_set must exist at the given versions.
2. Computes the update set by collecting all elements in the write_set which do not appear at the same version
in the read_set.
3. Verifies that each element in the update set increments the version number of the updated element by exactly 1.
4. Verifies that the signature set attached to the ConfigUpdateEnvelope satisfies the mod_policy for each
element in the update set.
5. Computes a new complete version of the config by applying the update set to the current config.
6. Writes the new config into a ConfigEnvelope which includes the CONFIG_UPDATE as the last_update
field and the new config encoded in the config field, along with the incremented sequence value.
7. Writes the new ConfigEnvelope into an Envelope of type CONFIG, and ultimately writes this as the sole transaction in a new configuration block.
When the peer (or any other receiver for Deliver) receives this configuration block, it should verify that the config
was appropriately validated by applying the last_update message to the current config and verifying that the
orderer-computed config field contains the correct new configuration.
Any valid configuration is a subset of the following configuration. Here we use the notation peer.<MSG> to define
a ConfigValue whose value field is a marshaled proto message of name <MSG> defined in fabric/protos/
peer/configuration.proto. The notations common.<MSG>, msp.<MSG>, and orderer.<MSG> cor-
respond similarly, but with their messages defined in fabric/protos/common/configuration.proto,
fabric/protos/msp/mspconfig.proto, and fabric/protos/orderer/configuration.proto
respectively.
Note, that the keys {{org_name}} and {{consortium_name}} represent arbitrary names, and indicate an
element which may be repeated with different names.
&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
        "Orderer":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },
            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
        "Consortiums":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    },
                },
            },
        },
    },
}
The ordering system channel needs to define ordering parameters, and consortiums for creating channels. There must be exactly one ordering system channel for an ordering service, and it is the first channel to be created (or, more accurately, bootstrapped). It is recommended never to define an Application section inside of the ordering system channel genesis configuration, though it may be done for testing. Note that any member with read access to the ordering system channel may see all channel creations, so this channel’s access should be restricted.
The ordering parameters are defined as the following subset of config:
&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Orderer":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                    },
                },
            },
            Values:map<string, *ConfigValue> {
                "ConsensusType":orderer.ConsensusType,
                "BatchSize":orderer.BatchSize,
                "BatchTimeout":orderer.BatchTimeout,
                "KafkaBrokers":orderer.KafkaBrokers,
            },
        },
    },
}
Each organization participating in ordering has a group element under the Orderer group. This group defines a
single parameter MSP which contains the cryptographic identity information for that organization. The Values of the
Orderer group determine how the ordering nodes function. They exist per channel, so orderer.BatchTimeout
for instance may be specified differently on one channel than another.
At startup, the orderer is faced with a filesystem which contains information for many channels. The orderer identifies
the system channel by identifying the channel with the consortiums group defined. The consortiums group has the
following structure.
&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Consortiums":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{consortium_name}}:&ConfigGroup{
                    Groups:map<string, *ConfigGroup> {
                        {{org_name}}:&ConfigGroup{
                            Values:map<string, *ConfigValue>{
                                "MSP":msp.MSPConfig,
                            },
                        },
                    },
                    Values:map<string, *ConfigValue> {
                        "ChannelCreationPolicy":common.Policy,
                    },
                },
            },
        },
    },
}
Note that each consortium defines a set of members, just like the organizational members for the ordering orgs. Each
consortium also defines a ChannelCreationPolicy. This is a policy which is applied to authorize channel
creation requests. Typically, this value will be set to an ImplicitMetaPolicy requiring that the new members of
the channel sign to authorize the channel creation. More details about channel creation follow later in this document.
Application configuration is for channels which are designed for application type transactions. It is defined as follows:
&ConfigGroup{
    Groups: map<string, *ConfigGroup> {
        "Application":&ConfigGroup{
            Groups:map<String, *ConfigGroup> {
                {{org_name}}:&ConfigGroup{
                    Values:map<string, *ConfigValue>{
                        "MSP":msp.MSPConfig,
                        "AnchorPeers":peer.AnchorPeers,
                    },
                },
            },
        },
    },
}
Just like with the Orderer section, each organization is encoded as a group. However, instead of only encoding
the MSP identity information, each org additionally encodes a list of AnchorPeers. This list allows the peers of
different organizations to contact each other for peer gossip networking.
The application channel encodes a copy of the orderer orgs and consensus options to allow for deterministic updating of
these parameters, so the same Orderer section from the orderer system channel configuration is included. However
from an application perspective this may be largely ignored.
When the orderer receives a CONFIG_UPDATE for a channel which does not exist, the orderer assumes that this must
be a channel creation request and performs the following.
1. The orderer identifies the consortium which the channel creation request is to be performed for. It does this by
looking at the Consortium value of the top level group.
2. The orderer verifies that the organizations included in the Application group are a subset of the organizations
included in the corresponding consortium and that the ApplicationGroup is set to version 1.
3. The orderer verifies that if the consortium has members, the new channel also has application members (creating consortiums and channels with no members is useful only for testing).
4. The orderer creates a template configuration by taking the Orderer group from the ordering system channel,
and creating an Application group with the newly specified members and specifying its mod_policy to
be the ChannelCreationPolicy as specified in the consortium config. Note that the policy is evaluated in
the context of the new configuration, so a policy requiring ALL members, would require signatures from all the
new channel members, not all the members of the consortium.
5. The orderer then applies the CONFIG_UPDATE as an update to this template configuration. Because the
CONFIG_UPDATE applies modifications to the Application group (its version is 1), the config code
validates these updates against the ChannelCreationPolicy. If the channel creation contains any other
modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be
invoked.
6. The new CONFIG transaction with the new channel config is wrapped and sent for ordering on the ordering
system channel. After ordering, the channel is created.
Every chaincode has an endorsement policy which specifies the set of peers on a channel that must execute chaincode
and endorse the execution results in order for the transaction to be considered valid. These endorsement policies define
the organizations (through their peers) who must “endorse” (i.e., approve of) the execution of a proposal.
Note: Recall that state, represented by key-value pairs, is separate from blockchain data. For more on this, check out
our Ledger documentation.
As part of the transaction validation step performed by the peers, each validating peer checks to make sure that the
transaction contains the appropriate number of endorsements and that they are from the expected sources (both of
these are specified in the endorsement policy). The endorsements are also checked to make sure they’re valid (i.e., that
they are valid signatures from valid certificates).
By default, endorsement policies are specified for a channel’s chaincode at instantiation or upgrade time (that is, one
endorsement policy covers all of the state associated with a chaincode).
However, there are cases where it may be necessary for a particular state (a particular key-value pair, in other words) to
have a different endorsement policy. This state-based endorsement allows the default chaincode-level endorsement
policies to be overridden by a different policy for the specified keys.
To illustrate the circumstances in which these two types of endorsement policies might be used, consider a channel
on which cars are being exchanged. The “creation” (also known as “issuance”) of a car as an asset that can be
traded (putting the key-value pair that represents it into the world state, in other words) would have to satisfy the
chaincode-level endorsement policy. To see how to set a chaincode-level endorsement policy, check out the section
below.
If the car requires a specific endorsement policy, it can be defined either when the car is created or afterwards. There
are a number of reasons why it might be necessary or preferable to set a state-specific endorsement policy. The car
might have historical importance or value that makes it necessary to have the endorsement of a licensed appraiser.
Also, the owner of the car (if they’re a member of the channel) might also want to ensure that their peer signs off on
a transaction. In both cases, an endorsement policy is required for a particular asset that is different from the
default endorsement policies for the other assets associated with that chaincode.
We’ll show you how to define a state-based endorsement policy in a subsequent section. But first, let’s see how we set
a chaincode-level endorsement policy.
Chaincode-level endorsement policies can be specified at instantiate time using either the SDK (for some sample code
on how to do this, click here) or in the peer CLI using the -P switch followed by the policy.
Note: Don’t worry about the policy syntax ('Org1.member', et al.) right now. We’ll talk more about the syntax in the next section.
For example:
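peer chaincode instantiate -C <channelid> -n mycc -P "AND('Org1.member', 'Org2.member')"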
This command deploys chaincode mycc (“my chaincode”) with the policy AND('Org1.member', 'Org2.
member') which would require that a member of both Org1 and Org2 sign the transaction.
Notice that, if the identity classification is enabled (see Membership Service Providers (MSP)), one can use the PEER
role to restrict endorsement to only peers.
For example:
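peer chaincode instantiate -C <channelid> -n mycc -P "AND('Org1.peer', 'Org2.peer')"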
A new organization added to the channel after instantiation can query a chaincode (provided the query has appropriate
authorization as defined by channel policies and any application level checks enforced by the chaincode) but will not
be able to execute or endorse the chaincode. The endorsement policy needs to be modified to allow transactions to be
committed with endorsements from the new organization.
Note: if not specified at instantiation time, the endorsement policy defaults to “any member of the organizations
in the channel”. For example, a channel with “Org1” and “Org2” would have a default endorsement policy of
“OR(‘Org1.member’, ‘Org2.member’)”.
As you can see above, policies are expressed in terms of principals (“principals” are identities matched to a role).
Principals are described as 'MSP.ROLE', where MSP represents the required MSP ID and ROLE represents one of
the four accepted roles: member, admin, client, and peer.
Here are a few examples of valid principals:
• 'Org0.admin': any administrator of the Org0 MSP
• 'Org1.member': any member of the Org1 MSP
• 'Org1.client': any client of the Org1 MSP
• 'Org1.peer': any peer of the Org1 MSP
The syntax of the language is:
EXPR(E[, E...])
Where EXPR is either AND, OR, or OutOf, and E is either a principal (with the syntax described above) or another
nested call to EXPR.
For example:
• AND('Org1.member', 'Org2.member', 'Org3.member') requests one signature from each
of the three principals.
• OR('Org1.member', 'Org2.member') requests one signature from either one of the two princi-
pals.
• OR('Org1.member', AND('Org2.member', 'Org3.member')) requests either one signa-
ture from a member of the Org1 MSP or one signature from a member of the Org2 MSP and one signature
from a member of the Org3 MSP.
• OutOf(1, 'Org1.member', 'Org2.member'), which resolves to the same thing as
OR('Org1.member', 'Org2.member').
• Similarly, OutOf(2, 'Org1.member', 'Org2.member') is equivalent to AND('Org1.
member', 'Org2.member'), and OutOf(2, 'Org1.member', 'Org2.member',
'Org3.member') is equivalent to OR(AND('Org1.member', 'Org2.member'),
AND('Org1.member', 'Org3.member'), AND('Org2.member', 'Org3.member')).
Setting regular chaincode-level endorsement policies is tied to the lifecycle of the corresponding chaincode. They can
only be set or modified when instantiating or upgrading the corresponding chaincode on a channel.
In contrast, key-level endorsement policies can be set and modified in a more granular fashion from within a chaincode.
The modification is part of the read-write set of a regular transaction.
The shim API provides the following functions to set and retrieve an endorsement policy for/from a regular key.
Note: ep below stands for the “endorsement policy”, which can be expressed either by using the same syntax
described above or by using the convenience function described below. Either method will generate a binary version
of the endorsement policy that can be consumed by the basic shim API.
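SetStateValidationParameter(key string, ep []byte) error
GetStateValidationParameter(key string) ([]byte, error)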
For keys that are part of Private data in a collection the following functions apply:
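SetPrivateDataValidationParameter(collection, key string, ep []byte) error
GetPrivateDataValidationParameter(collection, key string) ([]byte, error)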
To help set endorsement policies and marshal them into validation parameter byte arrays, the Go shim provides an
extension with convenience functions that allow the chaincode developer to deal with endorsement policies in terms
of the MSP identifiers of organizations, see KeyEndorsementPolicy:
type KeyEndorsementPolicy interface {
    // Policy returns the endorsement policy as bytes
    Policy() ([]byte, error)

    // AddOrgs adds the specified orgs to the list of orgs that are required
    // to endorse
    AddOrgs(roleType RoleType, organizations ...string) error

    // DelOrgs deletes the specified channel orgs from the existing key-level endorsement
    // policy for this KVS key. If any org is not present, an error will be returned.
    DelOrgs(organizations ...string) error

    // ListOrgs returns an array of channel orgs that are required to endorse changes
    ListOrgs() ([]string)
}
For example, to set an endorsement policy for a key where two specific orgs are required to endorse the key change,
pass both org MSPIDs to AddOrgs(), and then call Policy() to construct the endorsement policy byte array that
can be passed to SetStateValidationParameter().
To add the shim extension to your chaincode as a dependency, see Managing external dependencies for chaincode
written in Go.
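As a minimal sketch of that flow (assuming the statebased shim extension package path shown below, a stub of type shim.ChaincodeStubInterface available inside the chaincode, and illustrative MSP IDs and key name):

import (
    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/core/chaincode/shim/ext/statebased"
)

// require peers of both Org1MSP and Org2MSP (illustrative) to endorse
// future changes to the key "asset1" (illustrative)
ep, err := statebased.NewStateEP(nil)
if err != nil {
    return shim.Error(err.Error())
}
if err = ep.AddOrgs(statebased.RoleTypePeer, "Org1MSP", "Org2MSP"); err != nil {
    return shim.Error(err.Error())
}
// marshal the policy into the validation parameter byte array
epBytes, err := ep.Policy()
if err != nil {
    return shim.Error(err.Error())
}
// attach the policy to the key
if err = stub.SetStateValidationParameter("asset1", epBytes); err != nil {
    return shim.Error(err.Error())
}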
8.5.4 Validation
At commit time, setting a value of a key is no different from setting the endorsement policy of a key — both update
the state of the key and are validated based on the same rules.
As we discussed above, if a key is modified and no key-level endorsement policy is present, the chaincode-level
endorsement policy applies by default. This is also true when a key-level endorsement policy is set for a key for the
first time — the new key-level endorsement policy must first be endorsed according to the pre-existing chaincode-level
endorsement policy.
If a key is modified and a key-level endorsement policy is present, the key-level endorsement policy overrides the
chaincode-level endorsement policy. In practice, this means that the key-level endorsement policy can be either less
restrictive or more restrictive than the chaincode-level endorsement policy. Because the chaincode-level endorsement
policy must be satisfied in order to set a key-level endorsement policy for the first time, no trust assumptions have been
violated.
If a key’s endorsement policy is removed (set to nil), the chaincode-level endorsement policy becomes the default
again.
If a transaction modifies multiple keys with different associated key-level endorsement policies, all of these policies
need to be satisfied in order for the transaction to be valid.
8.6.1 Motivation
When a transaction is validated at time of commit, the peer performs various checks before applying the state changes
that come with the transaction itself:
• Validating the identities that signed the transaction.
• Verifying the signatures of the endorsers on the transaction.
• Ensuring the transaction satisfies the endorsement policies of the namespaces of the corresponding chaincodes.
There are use cases which demand custom transaction validation rules different from the default Fabric validation
rules, such as:
• UTXO (Unspent Transaction Output): When the validation needs to take into account whether the transaction double spends its inputs.
• Anonymous transactions: When the endorsement doesn’t contain the identity of the peer, but a signature and
a public key are shared that can’t be linked to the peer’s identity.
Fabric allows for the implementation and deployment of custom endorsement and validation logic into the peer to be
associated with chaincode handling in a pluggable manner. This logic can be either compiled into the peer as built in
selectable logic, or compiled and deployed alongside the peer as a Golang plugin.
8.6.2 Pluggable endorsement and validation logic
Recall that every chaincode is associated with its own endorsement and validation logic at the time of chaincode in-
stantiation. If the user doesn’t select one, the default built-in logic is selected implicitly. A peer administrator may alter
the endorsement/validation logic that is selected by extending the peer’s local configuration with the customization of
the endorsement/validation logic which is loaded and applied at peer startup.
8.6.3 Configuration
Each peer has a local configuration (core.yaml) that declares a mapping between the endorsement/validation logic
name and the implementation that is to be run.
The default implementations are called ESCC (with the “E” standing for endorsement) and VSCC (validation), and they can be found in the handlers section of the peer’s local configuration:
handlers:
  endorsers:
    escc:
      name: DefaultEndorsement
  validators:
    vscc:
      name: DefaultValidation
When the endorsement or validation implementation is compiled into the peer, the name property represents the
initialization function that is to be run in order to obtain the factory that creates instances of the endorsement/validation
logic.
The function is an instance method of the HandlerLibrary construct under core/handlers/library/
library.go and in order for custom endorsement or validation logic to be added, this construct needs to be extended
with any additional methods.
Since this is cumbersome and poses a deployment challenge, one can also deploy custom endorsement and validation logic as a Golang plugin by adding another property, library, alongside the name.
For example, if we have custom endorsement and validation logic which is implemented as a plugin, we would have
the following entries in the configuration in core.yaml:
handlers:
  endorsers:
    escc:
      name: DefaultEndorsement
    custom:
      name: customEndorsement
      library: /etc/hyperledger/fabric/plugins/customEndorsement.so
  validators:
    vscc:
      name: DefaultValidation
    custom:
      name: customValidation
      library: /etc/hyperledger/fabric/plugins/customValidation.so
And we’d have to place the .so plugin files in the peer’s local file system.
Note: Hereafter, custom endorsement or validation logic implementation is going to be referred to as “plugins”, even
if they are compiled into the peer.
To implement an endorsement plugin, one must implement the Plugin interface found in core/handlers/endorsement/api/endorsement.go:

// Plugin endorses a proposal response
type Plugin interface {
    // Endorse signs the given payload (ProposalResponsePayload bytes), and optionally mutates it.
    // Returns:
    // The Endorsement: A signature over the payload, and an identity that is used to verify the signature
    // The payload that was given as input (could be modified within this function)
    // Or error on failure
    Endorse(payload []byte, sp *peer.SignedProposal) (*peer.Endorsement, []byte, error)

    // Init injects dependencies into the instance of the Plugin
    Init(dependencies ...Dependency) error
}
An endorsement plugin instance of a given plugin type (identified either by the method name as an instance method
of the HandlerLibrary or by the plugin .so file path) is created for each channel by having the peer invoke the
New method in the PluginFactory interface which is also expected to be implemented by the plugin developer:
// PluginFactory creates a new instance of a Plugin
type PluginFactory interface {
New() Plugin
}
The Init method is expected to receive as input all the dependencies declared under core/handlers/
endorsement/api/, identified as embedding the Dependency interface.
After the creation of the Plugin instance, the Init method is invoked on it by the peer with the dependencies
passed as parameters.
Currently Fabric comes with the following dependencies for endorsement plugins:
• SigningIdentityFetcher: Returns an instance of SigningIdentity based on a given signed pro-
posal:
// SigningIdentity signs messages and serializes its public identity to bytes
type SigningIdentity interface {
    // Serialize returns a byte representation of this identity which is used to verify
    // messages signed by this SigningIdentity
    Serialize() ([]byte, error)

    // Sign signs the given payload and returns a signature
    Sign([]byte) ([]byte, error)
}
• StateFetcher: Fetches a State object which interacts with the world state:

// State defines interaction with the world state
type State interface {
    // GetPrivateDataMultipleKeys gets the values for the multiple private data items in a single call
    GetPrivateDataMultipleKeys(namespace, collection string, keys []string) ([][]byte, error)

    // GetTransientByTXID gets the values private data associated with the given txID
    GetTransientByTXID(txID string) ([]*rwset.TxPvtReadWriteSet, error)
}
To implement a validation plugin, one must implement the Plugin interface found in core/handlers/
validation/api/validation.go:
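// Plugin validates transactions
type Plugin interface {
    // Validate returns no error if the action at the given position inside the transaction
    // at the given position in the given block is valid, or an error if not.
    Validate(block *common.Block, namespace string, txPosition int, actionPosition int, contextData ...ContextDatum) error

    // Init injects dependencies into the instance of the Plugin
    Init(dependencies ...Dependency) error
}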
Each ContextDatum is additional runtime-derived metadata that is passed by the peer to the validation plugin.
Currently, the only ContextDatum that is passed is one that represents the endorsement policy of the chaincode:
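// SerializedPolicy defines a serialized policy
type SerializedPolicy interface {
    validation.ContextDatum

    // Bytes returns the bytes of the SerializedPolicy
    Bytes() []byte
}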
A validation plugin instance of a given plugin type (identified either by the method name as an instance method of the
HandlerLibrary or by the plugin .so file path) is created for each channel by having the peer invoke the New
method in the PluginFactory interface which is also expected to be implemented by the plugin developer:
The Init method is expected to receive as input all the dependencies declared under core/handlers/
validation/api/, identified as embedding the Dependency interface.
After the creation of the Plugin instance, the Init method is invoked on it by the peer with the dependencies passed
as parameters.
Currently Fabric comes with the following dependencies for validation plugins:
• IdentityDeserializer: Converts byte representation of identities into Identity objects that can be
used to verify signatures signed by them, be validated themselves against their corresponding MSP, and see
whether they satisfy a given MSP Principal. The full specification can be found in core/handlers/
validation/api/identities/identities.go.
• StateFetcher: Fetches a State object which interacts with the world state:
// GetStateRangeScanIterator returns an iterator that contains all the key-values between given key ranges.
// startKey is included in the results and endKey is excluded. An empty startKey refers to the first available key
// and an empty endKey refers to the last available key. For scanning all the keys, both the startKey and the endKey
// can be supplied as empty strings. However, a full scan should be used judiciously for performance reasons.
GetStateRangeScanIterator(namespace string, startKey string, endKey string) (ResultsIterator, error)
• Validation plugin consistency across peers: In future releases, the Fabric channel infrastructure will guarantee that the same validation logic is used for a given chaincode by all peers in the channel at any given blockchain height, in order to eliminate the chance of misconfiguration which might lead to state divergence among peers that accidentally run different implementations. However, for now it is the sole responsibility of the system operators and administrators to ensure this doesn’t happen.
• Validation plugin error handling: Whenever a validation plugin can’t determine whether a given transaction
is valid or not, because of some transient execution problem like inability to access the database, it should
return an error of type ExecutionFailureError that is defined in core/handlers/validation/api/
validation.go. Any other error that is returned is treated as an endorsement policy error and marks the
transaction as invalidated by the validation logic. However, if an ExecutionFailureError is returned, the
chain processing halts instead of marking the transaction as invalid. This is to prevent state divergence between
different peers.
• Error handling for private metadata retrieval: In case a plugin retrieves metadata for private data by making use of the StateFetcher interface, it is important that errors are handled as follows: CollConfigNotDefinedError and InvalidCollNameError, signalling that the specified collection does not exist, should be handled as deterministic errors and should not lead the plugin to return an ExecutionFailureError.
• Importing Fabric code into the plugin: Importing code that belongs to Fabric other than protobufs as part
of the plugin is highly discouraged, and can lead to issues when the Fabric code changes between releases, or
can cause inoperability issues when running mixed peer versions. Ideally, the plugin code should only use the
dependencies given to it, and should import the bare minimum other than protobufs.
Note: This topic deals with access control and policies on a channel administration level. To learn about access
control within a chaincode, check out our chaincode for developers tutorial.
Fabric uses access control lists (ACLs) to manage access to resources by associating a policy — which specifies a rule
that evaluates to true or false, given a set of identities — with the resource. Fabric contains a number of default ACLs.
In this document, we’ll talk about how they’re formatted and how the defaults can be overridden.
But before we can do that, it’s necessary to understand a little about resources and policies.
Resources
Users interact with Fabric by targeting a user chaincode, system chaincode, or an events stream source. As such, these
endpoints are considered “resources” on which access control should be exercised.
Application developers need to be aware of these resources and the default policies associated with them. The complete
list of these resources are found in configtx.yaml. You can look at a sample configtx.yaml file here.
The resources named in configtx.yaml form an exhaustive list of all internal resources currently defined by Fabric. The loose convention adopted there is <component>/<resource>. So cscc/GetConfigBlock is the resource for the GetConfigBlock call in the CSCC component.
Policies
Policies are fundamental to the way Fabric works because they allow the identity (or set of identities) associated with
a request to be checked against the policy associated with the resource needed to fulfill the request. Endorsement
policies are used to determine whether a transaction has been appropriately endorsed. The policies defined in the
channel configuration are referenced as modification policies as well as for access control, and are defined in the
channel configuration itself.
Policies can be structured in one of two ways: as Signature policies or as an ImplicitMeta policy.
Signature policies
These policies identify specific users who must sign in order for a policy to be satisfied. For example:
Policies:
  MyPolicy:
    Type: Signature
    Rule: "Org1.Peer OR Org2.Peer"
This policy construct can be interpreted as: the policy named MyPolicy can only be satisfied by the signature of an
identity with role of “a peer from Org1” or “a peer from Org2”.
Signature policies support arbitrary combinations of AND, OR, and NOutOf, allowing the construction of extremely
powerful rules like: “An admin of org A and two other admins, or 11 of 20 org admins”.
ImplicitMeta policies
ImplicitMeta policies aggregate the result of policies deeper in the configuration hierarchy that are ultimately
defined by Signature policies. They support default rules like “A majority of the organization admins”. These
policies use a different but still very simple syntax as compared to Signature policies: <ALL|ANY|MAJORITY>
<sub_policy>.
For example: ANY Readers or MAJORITY Admins.
Note that in the default policy configuration Admins have an operational role. Policies that specify that only Admins
— or some subset of Admins — have access to a resource will tend to be for sensitive or operational aspects of the
network (such as instantiating chaincode on a channel). Writers will tend to be able to propose ledger updates,
such as a transaction, but will not typically have administrative permissions. Readers have a passive role. They can
access information but do not have the permission to propose ledger updates, nor can they perform administrative
tasks. These default policies can be added to, edited, or supplemented, for example by the new peer and client
roles (if you have NodeOU support).
Here’s an example of an ImplicitMeta policy structure:
Policies:
  AnotherPolicy:
    Type: ImplicitMeta
    Rule: "MAJORITY Admins"
Here, the policy AnotherPolicy can be satisfied by a MAJORITY of Admins, where Admins is eventually specified by a lower-level Signature policy.
Access control defaults exist inside configtx.yaml, the file that configtxgen uses to build channel configura-
tions.
Access control can be updated one of two ways, either by editing configtx.yaml itself, which will propagate the
ACL change to any new channels, or by updating access control in the channel configuration of a particular channel.
ACLs are formatted as a key-value pair consisting of a resource function name followed by a string. To see what this
looks like, reference this sample configtx.yaml file.
Two excerpts from this sample:

# ACL policy for invoking chaincodes on peer
peer/Propose: /Channel/Application/Writers

# ACL policy for sending block events
event/Block: /Channel/Application/Readers

These ACLs define that access to the peer/Propose and event/Block resources is restricted to identities satisfying
the policies defined at the canonical paths /Channel/Application/Writers and /Channel/Application/Readers,
respectively.
If it is necessary to override ACL defaults when bootstrapping a network, or to change the ACLs before a channel has
been bootstrapped, the best practice is to update configtx.yaml.
Let's say you want to modify the peer/Propose ACL default, which specifies the policy for invoking chaincodes
on a peer, from /Channel/Application/Writers to a policy called MyPolicy.
This is done by adding a policy called MyPolicy (it could be called anything, but for this example we’ll call it
MyPolicy). The policy is defined in the Application.Policies section inside configtx.yaml and spec-
ifies a rule to be checked to grant or deny access to a user. For this example, we’ll be creating a Signature policy
identifying SampleOrg.admin.
Policies: &ApplicationDefaultPolicies
    Readers:
        Type: ImplicitMeta
        Rule: "ANY Readers"
    Writers:
        Type: ImplicitMeta
        Rule: "ANY Writers"
    Admins:
        Type: ImplicitMeta
        Rule: "MAJORITY Admins"
    MyPolicy:
        Type: Signature
        Rule: "OR('SampleOrg.admin')"
Then, edit the Application: ACLs section inside configtx.yaml to change peer/Propose from this:
peer/Propose: /Channel/Application/Writers
To this:
peer/Propose: /Channel/Application/MyPolicy
Once these fields have been changed in configtx.yaml, the configtxgen tool will use the policies and ACLs
defined when creating a channel creation transaction. When appropriately signed and submitted by one of the admins
of the consortium members, a new channel with the defined ACLs and policies is created.
Once MyPolicy has been bootstrapped into the channel configuration, it can also be referenced to override other
ACL defaults. For example:
SampleSingleMSPChannel:
    Consortium: SampleConsortium
    Application:
        <<: *ApplicationDefaults
        ACLs:
            <<: *ACLsDefault
            event/Block: /Channel/Application/MyPolicy
If channels have already been created that want to use MyPolicy to restrict access to peer/Propose — or if
they want to create ACLs they don’t want other channels to know about — they’ll have to update their channel
configurations one at a time through config update transactions.
Note: Channel configuration transactions are an involved process we won’t delve into here. If you want to read more
about them check out our document on channel configuration updates and our “Adding an Org to a Channel” tutorial.
After pulling, translating, and stripping the configuration block of its metadata, you would edit the configuration by
adding MyPolicy under Application: policies, where the Admins, Writers, and Readers policies
already live.
"MyPolicy": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "SampleOrg",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
"version": 0
}
},
"version": "0"
},
"peer/Propose": {
"policy_ref": "/Channel/Application/Writers"
To this:
"peer/Propose": {
"policy_ref": "/Channel/Application/MyPolicy"
Note: If you do not have ACLs defined in your channel configuration, you will have to add the entire ACL structure.
Once the configuration has been updated, it will need to be submitted by the usual channel update process.
If a member makes a request that calls multiple system chaincodes, all of the ACLs for those system chaincodes must
be satisfied.
For example, peer/Propose refers to any proposal request on a channel. If the particular proposal requires access
to two system chaincodes that require an identity satisfying Writers and one system chaincode that requires an
identity satisfying MyPolicy, then the member submitting the proposal must have an identity that evaluates to "true"
for both Writers and MyPolicy.
In the default configuration, Writers is a signature policy whose rule is SampleOrg.member. In other words,
"any member of my organization". MyPolicy, listed above, has a rule of SampleOrg.admin, or "any admin
of my organization". To satisfy these ACLs, the member would have to be both an administrator and a member
of SampleOrg. By default, all administrators are members (though not all members are administrators), but it is
possible to overwrite these policies to whatever you want them to be. As a result, it's important to keep track of these
policies to ensure that the ACLs for peer proposals are not impossible to satisfy (unless that is the intention).
Previously, the management of access control lists was done in an isolated_data section of the channel cre-
ation transaction and updated via PEER_RESOURCE_UPDATE transactions. Originally, it was thought that the
resources tree would handle the update of several functions that, ultimately, were handled in other ways, so main-
taining a separate parallel peer configuration tree was judged to be unnecessary.
Migration for customers using the experimental resources tree in v1.1 is possible. Because the official v1.2 release
does not support the old ACL methods, the network operators should shut down all their peers. Then, they should
upgrade them to v1.2, submit a channel reconfiguration transaction which enables the v1.2 capability and sets the
desired ACLs, and then finally restart the upgraded peers. The restarted peers will immediately consume the new
channel configuration and enforce the ACLs as desired.
Idemix is a cryptographic protocol suite, which provides strong authentication as well as privacy-preserving features
such as anonymity, the ability to transact without revealing the identity of the transactor, and unlinkability, the ability
of a single identity to send multiple transactions without revealing that the transactions were sent by the same identity.
There are three actors involved in an Idemix flow: user, issuer, and verifier.
• An issuer certifies a set of user attributes, which are issued in the form of a digital certificate, hereafter called a
"credential".
• The user later generates a “zero-knowledge proof” of possession of the credential and also selectively discloses
only the attributes the user chooses to reveal. The proof, because it is zero-knowledge, reveals no additional
information to the verifier, issuer, or anyone else.
As an example, suppose “Alice” needs to prove to Bob (a store clerk) that she has a driver’s license issued to her by
the DMV.
In this scenario, Alice is the user, the DMV is the issuer, and Bob is the verifier. In order to prove to Bob that Alice
has a driver’s license, she could show it to him. However, Bob would then be able to see Alice’s name, address, exact
age, etc. — much more information than Bob needs to know.
Instead, Alice can use Idemix to generate a “zero-knowledge proof” for Bob, which only reveals that she has a valid
driver’s license and nothing else.
So from the proof:
• Bob does not learn any additional information about Alice other than the fact that she has a valid license
(anonymity).
• If Alice visits the store multiple times and generates a proof each time for Bob, Bob would not be able to tell
from the proof that it was the same person (unlinkability).
Idemix authentication technology provides a trust model and security guarantees similar to those ensured by standard
X.509 certificates, but with underlying cryptographic algorithms that efficiently provide advanced privacy features,
including the ones described above. We'll compare Idemix and X.509 technologies in detail in the technical section
below.
To understand how to use Idemix with Hyperledger Fabric, we need to see which Fabric components correspond to
the user, issuer, and verifier in Idemix.
• The Fabric Java SDK is the API for the user. In the future, other Fabric SDKs will also support Idemix.
• Fabric provides two possible Idemix issuers:
1. Fabric CA for production environments or development, and
2. the idemixgen tool for development environments.
• The verifier is an Idemix MSP in Fabric.
In order to use Idemix in Hyperledger Fabric, the following three basic steps are required:
1. Consider the issuer. Generate the Idemix issuer key material, using either Fabric CA or the idemixgen tool.
2. Consider the verifier. Define an Idemix MSP in configtx.yaml. For example:
- &Org1Idemix
    # defaultorg defines the organization which is used in the sampleconfig
    # of the fabric.git development environment
    name: idemixMSP1
    msptype: idemix
    mspdir: crypto-config/peerOrganizations/org3.example.com
The msptype is set to idemix and the contents of the mspdir directory (crypto-config/
peerOrganizations/org3.example.com/msp in this example) contains the IssuerPublicKey
and IssuerRevocationPublicKey files.
Note that in this example, Org1Idemix represents the Idemix MSP for Org1 (not shown), which would also
have an X509 MSP.
3. Consider the user. Recall that the Java SDK is the API for the user.
There is only a single additional API call required in order to use Idemix with the Java SDK: the
idemixEnroll method of the org.hyperledger.fabric_ca.sdk.HFCAClient class. For
example, assume hfcaClient is your HFCAClient object and x509Enrollment is your org.
hyperledger.fabric.sdk.Enrollment associated with your X509 certificate.
The following call will return an org.hyperledger.fabric.sdk.Enrollment object associated with
your Idemix credential.
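A minimal sketch of that call, using the object names above; the Idemix MSP ID string is an illustrative assumption:

Enrollment idemixEnrollment = hfcaClient.idemixEnroll(x509Enrollment, "idemixMSPID1");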
From a verifier perspective, there is one more actor to consider: chaincode. What can chaincode learn about the
transactor when an Idemix credential is used?
The cid (Client Identity) library (for golang only) has been extended to support the GetAttributeValue function
when an Idemix credential is used. However, as mentioned in the “Current limitations” section below, there are only
two attributes which are disclosed in the Idemix case: ou and role.
If Fabric CA is the credential issuer:
• the value of the ou attribute is the identity’s affiliation (e.g. “org1.department1”);
• the value of the role attribute will be either ‘member’ or ‘admin’. A value of ‘admin’ means that the identity
is an MSP administrator. By default, identities created by Fabric CA will return the ‘member’ role. In order to
create an ‘admin’ identity, register the identity with the role attribute and a value of 2.
For an example of setting an affiliation in the Java SDK see this sample.
For an example of using the CID library in go chaincode to retrieve attributes, see this go chaincode.
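To make this concrete, here is a minimal Go sketch of reading those two attributes inside chaincode. It is a fragment
to drop into a chaincode package, and it assumes the v1.x import paths; verify both against your Fabric version:

import (
    "fmt"

    "github.com/hyperledger/fabric/core/chaincode/lib/cid"
    "github.com/hyperledger/fabric/core/chaincode/shim"
)

// getIdemixAttrs returns the ou and role attributes disclosed by an Idemix
// credential (the only two attributes available in the Idemix case).
func getIdemixAttrs(stub shim.ChaincodeStubInterface) (string, string, error) {
    ou, found, err := cid.GetAttributeValue(stub, "ou")
    if err != nil || !found {
        return "", "", fmt.Errorf("ou attribute not available: %v", err)
    }
    role, found, err := cid.GetAttributeValue(stub, "role")
    if err != nil || !found {
        return "", "", fmt.Errorf("role attribute not available: %v", err)
    }
    return ou, role, nil
}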
The certificate/credential concept and the issuance process are very similar in Idemix and X.509 certs: a set of at-
tributes is digitally signed with a signature that cannot be forged and there is a secret key to which a credential is
cryptographically bound.
The main difference between a standard X.509 certificate and an Identity Mixer credential is the signature scheme
that is used to certify the attributes. The signatures underlying the Identity Mixer system allow for efficient proofs of
the possession of a signature and the corresponding attributes without revealing the signature and (selected) attribute
values themselves. We use zero-knowledge proofs to ensure that such “knowledge” or “information” is not revealed
while ensuring that the signature over some attributes is valid and the user is in possession of the corresponding
credential secret key.
Such proofs, like X.509 certificates, can be verified with the public key of the authority that originally signed the
credential and cannot be successfully forged. Only the user who knows the credential secret key can generate the
proofs about the credential and its attributes.
With regard to unlinkability, when an X.509 certificate is presented, all attributes have to be revealed to verify the
certificate signature. This implies that all certificate usages for signing transactions are linkable.
To avoid such linkability, fresh X.509 certificates need to be used every time, which results in complex key manage-
ment and communication and storage overhead. Furthermore, there are cases where it is important that not even the
CA issuing the certificates is able to link all the transactions to the user.
Idemix helps to avoid linkability with respect to both the CA and verifiers, since even the CA is not able to link proofs
to the original credential. Neither the issuer nor a verifier can tell whether two proofs were derived from the same
credential (or from two different ones).
More details on the concepts and features of the Identity Mixer technology are described in the paper Concepts and
Languages for Privacy-Preserving Attribute-Based Authentication.
Topology Information
Given the above limitations, it is recommended to have only one Idemix-based MSP per channel or, at the extreme,
per network. Indeed, having multiple Idemix-based MSPs per channel would, for example, allow a party reading the
ledger of that channel to tell apart transactions signed by parties belonging to different Idemix-based MSPs. This is
because each transaction leaks the MSP-ID of the signer. In other words, Idemix currently provides only anonymity of
clients among the same organization (MSP).
In the future, Idemix could be extended to support anonymous hierarchies of Idemix-based Certification Authorities
whose certified credentials can be verified by using a unique public key, therefore achieving anonymity across
organizations (MSPs). This would allow multiple Idemix-based MSPs to coexist in the same channel.
In principle, a channel can be configured to have a single Idemix-based MSP and multiple X.509-based MSPs. Of
course, the interaction between these MSPs can potentially leak information. An assessment of the leaked information
needs to be done case by case.
Idemix technology is built from a blind signature scheme that supports multiple messages and efficient zero-knowledge
proofs of signature possession. All of the cryptographic building blocks for Idemix were published at the top confer-
ences and journals and verified by the scientific community.
This particular Idemix implementation for Fabric uses a pairing-based signature scheme that was briefly proposed by
Camenisch and Lysyanskaya and described in detail by Au et al.. The ability to prove knowledge of a signature in a
zero-knowledge proof Camenisch et al. was used.
This document describes the usage for the idemixgen utility, which can be used to create configuration files for the
identity mixer based MSP. Two commands are available, one for creating a fresh CA key pair, and one for creating an
MSP config using a previously generated CA key.
The idemixgen tool will create directories with the following structure:
- /ca/
    IssuerSecretKey
    IssuerPublicKey
    RevocationKey
- /msp/
    IssuerPublicKey
    RevocationPublicKey
- /user/
    SignerConfig
The ca directory contains the issuer secret key (including the revocation key) and should only be present for a CA. The
msp directory contains the information required to set up an MSP verifying idemix signatures. The user directory
specifies a default signer.
CA (issuer) keys suitable for identity mixer can be created using command idemixgen ca-keygen. This will
create directories ca and msp in the working directory.
After generating the ca and msp directories with idemixgen ca-keygen, a default signer specified in the user
directory can be added to the config with idemixgen signerconfig.
$ idemixgen signerconfig -h
usage: idemixgen signerconfig [<flags>]

Flags:
  -h, --help  Show context-sensitive help (also try --help-long and --help-man).
For example, we can create a default signer that is a member of organizational unit “OrgUnit1”, with enrollment
identity “johndoe”, revocation handle “1234”, and that is an admin, with the following command:
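A command of the following shape accomplishes this; the flag spelling is best verified against idemixgen
signerconfig --help:

idemixgen signerconfig -u OrgUnit1 --admin -e "johndoe" -r 1234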
The peer and the orderer host an HTTP server that offers a RESTful “operations” API. This API is unrelated to the
Fabric network services and is intended to be used by operators, not administrators or “users” of the network.
Peer
For each peer, the operations server can be configured in the operations section of core.yaml:
operations:
    # host and port for the operations server
    listenAddress: 127.0.0.1:9443
The listenAddress key defines the host and port that the operations server will listen on. If the server should
listen on all addresses, the host portion can be omitted.
The tls section is used to indicate whether or not TLS is enabled for the operations service, the location of the
service’s certificate and private key, and the locations of certificate authority root certificates that should be trusted for
client authentication. When clientAuthRequired is true, clients will be required to provide a certificate for
authentication.
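A sketch of that tls subsection, with placeholder paths (key names follow the description above; consult the sample
core.yaml for authoritative defaults):

operations:
    listenAddress: 127.0.0.1:9443
    tls:
        enabled: true
        cert:
            file: /path/to/operations-server.crt
        key:
            file: /path/to/operations-server.key
        clientAuthRequired: true
        clientRootCAs:
            files:
              - /path/to/operations-client-ca.crt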
Orderer
For each orderer, the operations server can be configured in the Operations section of orderer.yaml:
Operations:
    # host and port for the operations server
    ListenAddress: 127.0.0.1:8443
The ListenAddress key defines the host and port that the operations server will listen on. If the server should
listen on all addresses, the host portion can be omitted.
The TLS section is used to indicate whether or not TLS is enabled for the operations service, the location of the
service’s certificate and private key, and the locations of certificate authority root certificates that should be trusted for
client authentication. When ClientAuthRequired is true, clients will be required to provide a certificate for
authentication.
Operations Security
As the operations service is focused on operations and intentionally unrelated to the Fabric network, it does not use
the Membership Services Provider for access control. Instead, the operations service relies entirely on mutual TLS
with client certificate authentication.
It is highly recommended to enable mutual TLS by setting the value of clientAuthRequired to true in pro-
duction environments. With this configuration, clients are required to provide a valid certificate for authentication. If
the client does not provide a certificate or the service cannot verify the client’s certificate, the request is rejected. Note
that if clientAuthRequired is set to false, clients do not need to provide a certificate; if they do, however, and
the service cannot verify the certificate, then the request will be rejected.
When TLS is disabled, authorization is bypassed and any client that can connect to the operations endpoint will be
able to use the API.
The operations service provides a /logspec resource that operators can use to manage the active logging spec for a
peer or orderer. The resource is a conventional REST resource and supports GET and PUT requests.
When a GET /logspec request is received by the operations service, it will respond with a JSON payload that
contains the current logging specification:
{"spec":"info"}
When a PUT /logspec request is received by the operations service, it will read the body as a JSON payload. The
payload must consist of a single attribute named spec.
{"spec":"chaincode=debug:info"}
If the spec is activated successfully, the service will respond with a 204 "No Content" response. If an error
occurs, the service will respond with a 400 "Bad Request" and an error payload:
{"error":"error message"}
The operations service provides a /healthz resource that operators can use to help determine the liveness and health
of peers and orderers. The resource is a conventional REST resource that supports GET requests. The implementation
is intended to be compatible with the liveness probe model used by Kubernetes but can be used in other contexts.
When a GET /healthz request is received, the operations service will call all registered health checkers for the
process. When all of the health checkers return successfully, the operations service will respond with a 200 "OK"
and a JSON body:
{
"status": "OK",
"time": "2009-11-10T23:00:00Z"
}
If one or more of the health checkers returns an error, the operations service will respond with a 503 "Service
Unavailable" and a JSON body that includes information about which health checker failed:
{
"status": "Service Unavailable",
"time": "2009-11-10T23:00:00Z",
"failed_checks": [
{
"component": "docker",
"reason": "failed to connect to Docker daemon: invalid endpoint"
}
]
}
In the current version, the only health check that is registered is for Docker. Future versions will be enhanced to add
additional health checks.
When TLS is enabled, a valid client certificate is not required to use this service unless clientAuthRequired is
set to true.
8.10.3 Metrics
Some components of the Fabric peer and orderer expose metrics that can help provide insight into the behavior of the
system. Operators and administrators can use this information to better understand how the system is performing over
time.
Configuring Metrics
Fabric provides two ways to expose metrics: a pull model based on Prometheus and a push model based on StatsD.
Prometheus
A typical Prometheus deployment scrapes metrics by requesting them from an HTTP endpoint exposed by instru-
mented targets. As Prometheus is responsible for requesting the metrics, it is considered a pull system.
When configured, a Fabric peer or orderer will present a /metrics resource on the operations service.
Peer
A peer can be configured to expose a /metrics endpoint for Prometheus to scrape by setting the metrics provider to
prometheus in the metrics section of core.yaml.
metrics:
    provider: prometheus
Orderer
An orderer can be configured to expose a /metrics endpoint for Prometheus to scrape by setting the metrics provider
to prometheus in the Metrics section of orderer.yaml.
Metrics:
    Provider: prometheus
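As a hedged sketch, a Prometheus scrape job pointing at a peer's operations endpoint could look like this; the target
address comes from the core.yaml excerpt earlier, and Prometheus reads the /metrics path by default:

scrape_configs:
  - job_name: 'fabric-peer'
    static_configs:
      - targets: ['127.0.0.1:9443']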
StatsD
StatsD is a simple statistics aggregation daemon. Metrics are sent to a statsd daemon where they are collected,
aggregated, and pushed to a backend for visualization and alerting. As this model requires instrumented processes to
send metrics data to StatsD, this is considered a push system.
Peer
A peer can be configured to send metrics to StatsD by setting the metrics provider to statsd in the metrics
section of core.yaml. The statsd subsection must also be configured with the address of the StatsD daemon,
the network type to use (tcp or udp), and how often to send the metrics. An optional prefix may be specified to
help differentiate the source of the metrics — for example, differentiating metrics coming from separate peers — that
would be prepended to all generated metrics.
metrics:
    provider: statsd
    statsd:
        network: udp
        address: 127.0.0.1:8125
        writeInterval: 10s
        prefix: peer-0
Orderer
An orderer can be configured to send metrics to StatsD by setting the metrics provider to statsd in the Metrics
section of orderer.yaml. The Statsd subsection must also be configured with the address of the StatsD daemon,
the network type to use (tcp or udp), and how often to send the metrics. An optional prefix may be specified to
help differentiate the source of the metrics.
Metrics:
    Provider: statsd
    Statsd:
        Network: udp
        Address: 127.0.0.1:8125
        WriteInterval: 30s
        Prefix: org-orderer
For a look at the different metrics that are generated, check out Metrics Reference.
The following metrics are currently emitted for consumption by StatsD. The %{variable_name} nomenclature
represents segments that vary based on context.
For example, %{channel} will be replaced with the name of the channel associated with the metric.
Hyperledger Fabric code should use the vendored package github.com/pkg/errors in place of the standard error type
provided by Go. This package allows easy generation and display of stack traces with error messages.
github.com/pkg/errors should be used in place of all calls to fmt.Errorf() or errors.New(). Using this
package will generate a call stack that will be appended to the error message.
Using this package is simple and will only require easy tweaks to your code.
First, you’ll need to import github.com/pkg/errors.
Next, update all errors that are generated by your code to use one of the error creation functions (errors.New(),
errors.Errorf(), errors.WithMessage(), errors.Wrap(), errors.Wrapf()).
Note: See https://godoc.org/github.com/pkg/errors for complete documentation of the available error creation
functions. Also, refer to the General guidelines section below for more specific guidelines for using the package for
Fabric code.
Finally, change the formatting directive for any logger or fmt.Printf() calls from %s to %+v to print the call stack along
with the error message.
• If you are servicing a user request, you should log the error and return it.
• If the error comes from an external source, such as a Go library or vendored package, wrap the error using
errors.Wrap() to generate a call stack for the error.
• If the error comes from another Fabric function, add further context, if desired, to the error message using
errors.WithMessage() while leaving the call stack unaffected.
• A panic should not be allowed to propagate to other packages.
The following example program provides a clear demonstration of using the package:
package main

import (
    "fmt"

    "github.com/pkg/errors"
)

// createError returns a new error with an attached stack trace.
func createError() error { return errors.New("original error") }

// wrapWithStack uses errors.Wrap, as you would for an error from an external source.
func wrapWithStack() error { return errors.Wrap(createError(), "wrapping an error with stack") }

// wrapWithoutStack uses errors.WithMessage, adding context while leaving the stack unaffected.
func wrapWithoutStack() error { return errors.WithMessage(createError(), "wrapping an error without stack") }

func main() {
    err := createError()
    fmt.Printf("print error without stack: %s\n\n", err)
    fmt.Printf("print error with stack: %+v\n\n", err)
    err = wrapWithoutStack()
    fmt.Printf("%+v\n\n", err)
    err = wrapWithStack()
    fmt.Printf("%+v\n\n", err)
}
8.13.1 Overview
Logging in the peer and orderer is provided by the common/flogging package. Chaincodes written in Go
also use this package if they use the logging methods provided by the shim. This package supports:
• Logging control based on the severity of the message
• Logging control based on the software logger generating the message
• Different pretty-printing options based on the severity of the message
All logs are currently directed to stderr. Global and logger-level control of logging by severity is provided for both
users and developers. There are currently no formalized rules for the types of information provided at each severity
level. When submitting bug reports, developers may want to see full logs down to the DEBUG level.
In pretty-printed logs the logging level is indicated both by color and by a four-character code, e.g, “ERRO” for
ERROR, “DEBU” for DEBUG, etc. In the logging context a logger is an arbitrary name (string) given by developers
to groups of related messages. In the pretty-printed example below, the loggers ledgermgmt, kvledger, and
peer are generating logs.
2018-11-01 15:32:38.268 UTC [ledgermgmt] initialize -> INFO 002 Initializing ledger mgmt
2018-11-01 15:32:38.268 UTC [kvledger] NewProvider -> INFO 003 Initializing ledger provider
2018-11-01 15:32:38.342 UTC [kvledger] NewProvider -> INFO 004 ledger provider Initialized
2018-11-01 15:32:38.357 UTC [ledgermgmt] initialize -> INFO 005 ledger mgmt initialized
2018-11-01 15:32:38.357 UTC [peer] func1 -> INFO 006 Auto-detected peer address: 172.24.0.3:7051
2018-11-01 15:32:38.357 UTC [peer] func1 -> INFO 007 Returning peer0.org1.example.com:7051
An arbitrary number of loggers can be created at runtime; therefore there is no "master list" of loggers, and logging
control constructs cannot check whether the named loggers actually do or will exist.
The logging levels of the peer and orderer commands are controlled by a logging specification, which is set via
the FABRIC_LOGGING_SPEC environment variable.
The full logging level specification is of the form
[<logger>[,<logger>...]=]<level>[:[<logger>[,<logger>...]=]<level>...]
Logging severity levels are specified using case-insensitive strings chosen from FATAL, PANIC, ERROR, WARNING,
INFO, and DEBUG.
A logging level by itself is taken as the overall default. Otherwise, overrides for individual or groups of loggers can be
specified using the syntax
<logger>[,<logger>...]=<level>
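For example, FABRIC_LOGGING_SPEC=grpc=debug:info raises the grpc logger to DEBUG while leaving the default
level for all other loggers at INFO; the same specification appears in the TLS troubleshooting section later in this
guide.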
The logging format of the peer and orderer commands is controlled via the FABRIC_LOGGING_FORMAT environment
variable. This can be set to a format string, such as the default, to print the logs in a human-readable console format.
It can also be set to json to output logs in JSON format.
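For reference, the default format string shipped with the sample configuration looks like the following; treat this as
an assumption and verify against your own core.yaml:

'%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'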
8.13.4 Go chaincodes
The standard mechanism to log within a chaincode application is to integrate with the logging transport exposed to
each chaincode instance via the peer. The chaincode shim package provides APIs that allow a chaincode to create
and manage logging objects whose logs will be formatted and interleaved consistently with the shim logs.
As independently executed programs, user-provided chaincodes may technically also produce output on stdout/stderr.
While naturally useful for “devmode”, these channels are normally disabled on a production network to mitigate abuse
from broken or malicious code. However, it is possible to enable this output even for peer-managed containers (e.g.
“netmode”) on a per-peer basis via the CORE_VM_DOCKER_ATTACHSTDOUT=true configuration option.
Once enabled, each chaincode will receive its own logging channel keyed by its container-id. Any output written to
either stdout or stderr will be integrated with the peer’s log on a per-line basis. It is not recommended to enable this
for production.
API

The chaincode shim provides a LoggingLevel type with the constants LogDebug, LogInfo, LogNotice, LogWarning,
LogError, and LogCritical, which can be used directly, or generated by passing a case-insensitive version of the strings
DEBUG, INFO, NOTICE, WARNING, ERROR, and CRITICAL to the shim.LogLevel API.
The f forms of the logging APIs provide for precise control over the formatting of the logs. The non-f forms of the
APIs currently insert a space between the printed representations of the arguments, and arbitrarily choose the formats
to use.
In the current implementation, the logs produced by the shim and a ChaincodeLogger are timestamped, marked
with the logger name and severity level, and written to stderr. Note that logging level control is currently based on
the name provided when the ChaincodeLogger is created. To avoid ambiguities, all ChaincodeLoggers should
be given unique names other than "shim". The logger name will appear in all log messages created by the logger. The
shim logs as "shim".
The default logging level for loggers within the Chaincode container can be set in the core.yaml file. The key
chaincode.logging.level sets the default level for all loggers within the Chaincode container. The key
chaincode.logging.shim overrides the default level for the shim logger.
The default logging level can be overridden by using environment variables. CORE_CHAINCODE_LOGGING_LEVEL
sets the default logging level for all loggers. CORE_CHAINCODE_LOGGING_SHIM overrides the level for the shim
logger.
Go language chaincodes can also control the logging level of the chaincode shim interface through the
SetLoggingLevel API.
SetLoggingLevel(LoggingLevel level) - Control the logging level of the shim
Below is a simple example of how a chaincode might create a private logging object logging at the LogInfo level
(the logger name is arbitrary):

var logger = shim.NewLogger("myChaincodeLogger")

func main() {
    logger.SetLevel(shim.LogInfo)
    ...
}
Fabric supports secure communication between nodes using TLS. TLS communication can use both one-way (server
only) and two-way (server and client) authentication.
A peer node is both a TLS server and a TLS client. It is the former when another peer node, application, or the CLI
makes a connection to it and the latter when it makes a connection to another peer node or orderer.
To enable TLS on a peer node set the following peer configuration properties:
• peer.tls.enabled = true
• peer.tls.cert.file = fully qualified path of the file that contains the TLS server certificate
• peer.tls.key.file = fully qualified path of the file that contains the TLS server private key
• peer.tls.rootcert.file = fully qualified path of the file that contains the certificate chain of the cer-
tificate authority(CA) that issued TLS server certificate
By default, TLS client authentication is turned off when TLS is enabled on a peer node. This means that the peer node
will not verify the certificate of a client (another peer node, application, or the CLI) during a TLS handshake. To enable
TLS client authentication on a peer node, set the peer configuration property peer.tls.clientAuthRequired
to true and set the peer.tls.clientRootCAs.files property to the CA chain file(s) that contain(s) the CA
certificate chain(s) that issued TLS certificates for your organization’s clients.
By default, a peer node will use the same certificate and private key pair when acting as a TLS server and client.
To use a different certificate and private key pair for the client side, set the peer.tls.clientCert.file and
peer.tls.clientKey.file configuration properties to the fully qualified path of the client certificate and key
file, respectively.
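The bullet properties above map onto core.yaml roughly as sketched below; the paths are placeholders, and the
key names should be verified against the sample core.yaml:

peer:
    tls:
        enabled: true
        cert:
            file: /path/to/server.crt
        key:
            file: /path/to/server.key
        rootcert:
            file: /path/to/ca-chain.crt
        clientAuthRequired: true
        clientRootCAs:
            files:
              - /path/to/client-ca-chain.crt
        clientCert:
            file: /path/to/client.crt
        clientKey:
            file: /path/to/client.key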
TLS with client authentication can also be enabled by setting the following environment variables:
• CORE_PEER_TLS_ENABLED = true
• CORE_PEER_TLS_CERT_FILE = fully qualified path of the server certificate
• CORE_PEER_TLS_KEY_FILE = fully qualified path of the server private key
• CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the CA chain file
• CORE_PEER_TLS_CLIENTAUTHREQUIRED = true
• CORE_PEER_TLS_CLIENTROOTCAS_FILES = fully qualified path of the CA chain file
• CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate
• CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client key
When client authentication is enabled on a peer node, a client is required to send its certificate during a TLS handshake.
If the client does not send its certificate, the handshake will fail and the peer will close the connection.
When a peer joins a channel, the root CA certificate chains of the channel members are read from the config block of
the channel and are added to the TLS client and server root CAs data structure. So, peer-to-peer and peer-to-orderer
communication should work seamlessly.
To enable TLS on an orderer node, set the following orderer configuration properties:
• General.TLS.Enabled = true
• General.TLS.PrivateKey = fully qualified path of the file that contains the server private key
• General.TLS.Certificate = fully qualified path of the file that contains the server certificate
• General.TLS.RootCAs = fully qualified path of the file that contains the certificate chain of the CA that
issued TLS server certificate
By default, TLS client authentication is turned off on orderer, as is the case with peer. To enable TLS client authenti-
cation, set the following config properties:
• General.TLS.ClientAuthRequired = true
• General.TLS.ClientRootCAs = fully qualified path of the file that contains the certificate chain of the
CA that issued the TLS server certificate
TLS with client authentication can also be enabled by setting the following environment variables:
• ORDERER_GENERAL_TLS_ENABLED = true
• ORDERER_GENERAL_TLS_PRIVATEKEY = fully qualified path of the file that contains the server private key
• ORDERER_GENERAL_TLS_CERTIFICATE = fully qualified path of the file that contains the server certificate
• ORDERER_GENERAL_TLS_ROOTCAS = fully qualified path of the file that contains the certificate chain of
the CA that issued TLS server certificate
• ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED = true
• ORDERER_GENERAL_TLS_CLIENTROOTCAS = fully qualified path of the file that contains the certificate
chain of the CA that issued TLS server certificate
The following environment variables must be set when running peer CLI commands against a TLS enabled peer node:
• CORE_PEER_TLS_ENABLED = true
• CORE_PEER_TLS_ROOTCERT_FILE = fully qualified path of the file that contains cert chain of the CA that
issued the TLS server cert
If TLS client authentication is also enabled on the remote server, the following variables must be set in addition to
those above:
• CORE_PEER_TLS_CLIENTAUTHREQUIRED = true
• CORE_PEER_TLS_CLIENTCERT_FILE = fully qualified path of the client certificate
• CORE_PEER_TLS_CLIENTKEY_FILE = fully qualified path of the client private key
When running a command that connects to the ordering service, like peer channel <create|update|fetch> or peer
chaincode <invoke|instantiate>, the following command line arguments must also be specified if TLS is enabled on
the orderer:
• --tls
• --cafile <fully qualified path of the file that contains cert chain of the orderer CA>
If TLS client authentication is enabled on the orderer, the following arguments must be specified as well:
• --clientauth
• --keyfile <fully qualified path of the file that contains the client private key>
• --certfile <fully qualified path of the file that contains the client certificate>
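For example, a channel creation against a TLS-enabled orderer might look like the following sketch; the channel
name and transaction file are placeholders, and ORDERER_CA is assumed to point at the orderer CA chain file:

peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel.tx --tls --cafile $ORDERER_CA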
Before debugging TLS issues, it is advisable to enable GRPC debug on both the TLS client and the server side to get
additional information. To enable GRPC debug, set the environment variable FABRIC_LOGGING_SPEC to include
grpc=debug. For example, to set the default logging level to INFO and the GRPC logging level to DEBUG, set the
logging specification to grpc=debug:info.
If you see the error message remote error: tls: bad certificate on the client side, it usually means
that the TLS server has enabled client authentication and the server either did not receive the correct client certificate
or it received a client certificate that it does not trust. Make sure the client is sending its certificate and that it has been
signed by one of the CA certificates trusted by the peer or orderer node.
If you see the error message remote error: tls: bad certificate in your chaincode logs, ensure that
your chaincode has been built using the chaincode shim provided with Fabric v1.1 or newer. If your chaincode does not
contain a vendored copy of the shim, deleting the chaincode container and restarting its peer will rebuild the chaincode
container using the current shim version.
This document assumes that the reader knows how to set up a Kafka cluster and a ZooKeeper ensemble, and keep
them secure for general usage by preventing unauthorized access. The sole purpose of this guide is to identify the
steps you need to take so as to have a set of Hyperledger Fabric ordering service nodes (OSNs) use your Kafka cluster
and provide an ordering service to your blockchain network.
Each channel maps to a separate single-partition topic in Kafka. When an OSN receives transactions via the
Broadcast RPC, it checks to make sure that the broadcasting client has permissions to write on the channel, then
relays (i.e. produces) those transactions to the appropriate partition in Kafka. This partition is also consumed by the
OSN which groups the received transactions into blocks locally, persists them in its local ledger, and serves them to
receiving clients via the Deliver RPC. For low-level details, refer to the document that describes how we came to
this design — Figure 8 is a schematic representation of the process described above.
8.15.3 Steps
Let K and Z be the number of nodes in the Kafka cluster and the ZooKeeper ensemble respectively:
1. At a minimum, K should be set to 4. (As we will explain in Step 4 below, this is the minimum number of
nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all
channels will continue to be writeable and readable, and new channels can be created.)
2. Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to
avoid single point of failures. Anything beyond 7 ZooKeeper servers is considered an overkill.
Then proceed as follows:
3. Orderers: Encode the Kafka-related information in the network’s genesis block. If you are using
configtxgen, edit configtx.yaml —or pick a preset profile for the system channel’s genesis block— so
that:
(a) Orderer.OrdererType is set to kafka.
(b) Orderer.Kafka.Brokers contains the address of at least two of the Kafka brokers in your cluster in
IP:port notation. The list does not need to be exhaustive. (These are your bootstrap brokers.)
4. Orderers: Set the maximum block size. Each block will have at most Orderer.AbsoluteMaxBytes bytes (not
including headers), a value that you can set in configtx.yaml. Let the value you pick here be A and make
note of it — it will affect how you configure your Kafka brokers in Step 6.
5. Orderers: Create the genesis block. Use configtxgen. The settings you picked in Steps 3 and 4 above are
system-wide settings, i.e. they apply across the network for all the OSNs. Make note of the genesis block’s
location.
6. Kafka cluster: Configure your Kafka brokers appropriately. Ensure that every Kafka broker has these keys
configured:
(a) unclean.leader.election.enable = false — Data consistency is key in a blockchain envi-
ronment. We cannot have a channel leader chosen outside of the in-sync replica set, or we run the risk of
overwriting the offsets that the previous leader produced, and —as a result— rewrite the blockchain that
the orderers produce.
(b) min.insync.replicas = M — Where you pick a value M such that 1 < M < N (see default.
replication.factor below). Data is considered committed when it is written to at least M replicas
(which are then considered in-sync and belong to the in-sync replica set, or ISR). In any other case, the
write operation returns an error. Then:
i. If up to N-M replicas —out of the N that the channel data is written to— become unavailable, opera-
tions proceed normally.
ii. If more replicas become unavailable, Kafka cannot maintain an ISR set of M, so it stops accepting
writes. Reads work without issues. The channel becomes writeable again when M replicas get in-sync.
(c) default.replication.factor = N — Where you pick a value N such that N < K. A replication
factor of N means that each channel will have its data replicated to N brokers. These are the candidates
for the ISR set of a channel. As we noted in the min.insync.replicas section above, not all
of these brokers have to be available all the time. N should be set strictly smaller than K because channel
creations cannot go forward if fewer than N brokers are up. So if you set N = K, a single broker going down
means that no new channels can be created on the blockchain network; the crash fault tolerance of the
ordering service is non-existent.
Based on what we’ve described above, the minimum allowed values for M and N are 2 and 3 respectively.
This configuration allows for the creation of new channels to go forward, and for all channels to continue
to be writeable.
(d) message.max.bytes and replica.fetch.max.bytes should be set to a value larger than A, the
value you picked in Orderer.AbsoluteMaxBytes in Step 4 above. Add some buffer to account for
headers — 1 MiB is more than enough. The following condition applies:
Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes

7. Orderers: Point each OSN to the genesis block. Edit General.GenesisFile in orderer.yaml so that it
points to the genesis block created in Step 5 above. (While at it, ensure all other keys in that YAML file are
set appropriately.)
8. Orderers: Adjust polling intervals and timeouts. (Optional step.)
(a) The Kafka.Retry section in the orderer.yaml file allows you to adjust the frequency of the meta-
data/producer/consumer requests, as well as the socket timeouts. (These are all settings you would expect
to see in a Kafka producer or consumer.)
(b) Additionally, when a new channel is created, or when an existing channel is reloaded (in case of a just-
restarted orderer), the orderer interacts with the Kafka cluster in the following ways:
i. It creates a Kafka producer (writer) for the Kafka partition that corresponds to the channel.
ii. It uses that producer to post a no-op CONNECT message to that partition.
iii. It creates a Kafka consumer (reader) for that partition.
If any of these steps fail, you can adjust the frequency with which they are repeated. Specifically
they will be re-attempted every Kafka.Retry.ShortInterval for a total of Kafka.Retry.
ShortTotal, and then every Kafka.Retry.LongInterval for a total of Kafka.Retry.
LongTotal until they succeed. Note that the orderer will be unable to write to or read from a channel
until all of the steps above have been completed successfully.
9. Set up the OSNs and Kafka cluster so that they communicate over SSL. (Optional step, but highly rec-
ommended.) Refer to the Confluent guide for the Kafka cluster side of the equation, and set the keys under
Kafka.TLS in orderer.yaml on every OSN accordingly.
10. Bring up the nodes in the following order: ZooKeeper ensemble, Kafka cluster, ordering service nodes.
8.15.4 Additional considerations
1. Preferred message size. In Step 4 above (see Steps section) you can also set the preferred size of blocks
by setting the Orderer.Batchsize.PreferredMaxBytes key. Kafka offers higher throughput when
dealing with relatively small messages; aim for a value no bigger than 1 MiB.
2. Using environment variables to override settings. When using the sample Kafka and Zookeeper Docker
images provided with Fabric (see images/kafka and images/zookeeper respectively), you can over-
ride a Kafka broker or a ZooKeeper server’s settings by using environment variables. Replace the dots of
the configuration key with underscores — e.g. KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
will allow you to override the default value of unclean.leader.election.enable. The same ap-
plies to the OSNs for their local configuration, i.e. what can be set in orderer.yaml. For example
ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s allows you to override the default value for Orderer.
Kafka.Retry.ShortInterval.
8.15.5 Supported Kafka versions and upgrading
Fabric uses the sarama client library and vendors a version of it that supports Kafka 0.10 to 1.0, yet is still known to
work with older versions.
Using the Kafka.Version key in orderer.yaml, you can configure which version of the Kafka protocol is
used to communicate with the Kafka cluster's brokers. Because Kafka brokers are backward compatible with older
protocol versions, upgrading your Kafka brokers to a new version does not require an update of the Kafka.Version
key value, but the Kafka cluster might suffer a performance penalty while using an older protocol version.
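A sketch of the corresponding stanza in orderer.yaml (the version value shown is illustrative):

Kafka:
    # Kafka protocol version used to communicate with the cluster's brokers
    Version: 0.10.2.0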
8.15.6 Debugging
Set the environment variable FABRIC_LOGGING_SPEC to DEBUG and set Kafka.Verbose to true in orderer.yaml.
Commands Reference
9.1 peer
9.1.1 Description
The peer command has five different subcommands, each of which allows administrators to perform a specific set of
tasks related to a peer. For example, you can use the peer channel subcommand to join a peer to a channel, or the
peer chaincode command to deploy a smart contract chaincode to a peer.
9.1.2 Syntax
Each subcommand has different options available, and these are described in their own dedicated topic. For brevity,
we often refer to a command (peer), a subcommand (channel), or subcommand option (fetch) simply as a
command.
If a subcommand is specified without an option, then it will return some high level help text as described in the
--help flag below.
9.1.3 Flags
Each peer subcommand has a specific set of flags associated with it, many of which are designated global because
they can be used in all subcommand options. These flags are described with the relevant peer subcommand.
The top level peer command has the following flag:
• --help
Use --help to get brief help text for any peer command. The --help flag is very useful – it can be used to
get command help, subcommand help, and even option help.
For example
peer --help
peer channel --help
peer channel list --help
9.1.4 Usage
Usage:
  peer channel join [flags]

Flags:
  -b, --blockpath string   Path to file containing genesis block
  -h, --help               help for join

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
This shows brief help syntax for the peer channel join command.
The peer chaincode command allows administrators to perform chaincode related operations on a peer, such as
installing, instantiating, invoking, packaging, querying, and upgrading chaincode.
9.2.1 Syntax
9.2.2 Flags
Each peer chaincode subcommand has both a set of flags specific to an individual subcommand, as well as a set
of global flags that relate to all peer chaincode subcommands. Not all subcommands use these flags. For
instance, the query subcommand does not need the --orderer flag.
The individual flags are described with the relevant subcommand. The global flags are
• --cafile <string>
Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
• --certfile <string>
Path to file containing PEM-encoded X509 public key to use for mutual TLS communication with the orderer
endpoint
• --keyfile <string>
Path to file containing PEM-encoded private key to use for mutual TLS communication with the orderer endpoint
• -o or --orderer <string>
Ordering service endpoint specified as <hostname or IP address>:<port>
• --ordererTLSHostnameOverride <string>
The hostname override to use when validating the TLS connection to the orderer
• --tls
Use TLS when communicating with the orderer endpoint
• --transient <string>
Transient map of arguments in JSON encoding
Package the specified chaincode into a deployment spec and save it on the peer's path.
Usage:
  peer chaincode install [flags]

Flags:
      --connectionProfile string   Connection profile that provides the necessary connection information
                                   for the network. Note: currently only supported for providing peer
                                   connection information

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
  peer chaincode instantiate [flags]

Flags:
  -C, --channelID string   The channel on which this command should be executed

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Invoke the specified chaincode. It will try to commit the endorsed transaction to the network.

Usage:
  peer chaincode invoke [flags]

Flags:
  -C, --channelID string               The channel on which this command should be executed
      --waitForEvent                   Whether to wait for the event from each peer's deliver filtered service
                                       signifying that the 'invoke' transaction has been committed successfully
      --waitForEventTimeout duration   Time to wait for the event from each peer's deliver filtered service
                                       signifying that the 'invoke' transaction has been committed successfully

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Get the instantiated chaincodes on a channel if a channel is specified, or get the installed chaincodes on the peer

Usage:
  peer chaincode list [flags]

Flags:
  -C, --channelID string   The channel on which this command should be executed

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
  peer chaincode package [flags]

Flags:
  -s, --cc-package   create CC deployment spec for owner endorsements instead of raw CC deployment spec

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Get endorsed result of chaincode function call and print it. It won't generate transaction.

Usage:
  peer chaincode query [flags]

Flags:
  -C, --channelID string   The channel on which this command should be executed

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
  peer chaincode signpackage [flags]

Flags:
  -h, --help   help for signpackage

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Upgrade an existing chaincode with the specified one. The new chaincode will immediately replace the existing
chaincode upon the transaction committed.

Usage:
  peer chaincode upgrade [flags]

Flags:
  -C, --channelID string   The channel on which this command should be executed

Global Flags:
      --cafile string   Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Here are some examples of the peer chaincode instantiate command, which instantiates the chaincode
named mycc at version 1.0 on channel mychannel:
• Using the --tls and --cafile global flags to instantiate the chaincode in a network with TLS enabled:
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
• Using only the command-specific options to instantiate the chaincode in a network with TLS disabled:

peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"

Here is an example of the peer chaincode invoke command, which invokes the chaincode named mycc at version
1.0 on channel mychannel, asking to move 10 units from variable a to variable b:

peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 -c '{"Args":["invoke","a","b","10"]}'
.
.
.
2018-02-22 16:34:27.106 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> DEBU 00a
˓→ESCC invoke result: version:1 response:<status:200 message:"OK" > payload:"\n
˓→#\3413\274\022Y\nE\022\024\n\004lscc\022\014\n\n\n\004mycc\022\002\010\003\022-
˓→\n\004mycc\022
˓→%\n\007\n\001a\022\002\010\003\n\007\n\001b\022\002\010\003\032\007\n\001a\032\00290\032\010\n
˓→"\013\022\004mycc\032\0031.0" endorsement:<endorser:"\n\007Org1MSP\022\262\006--
˓→---BEGIN CERTIFICATE-----\nMIICLjCCAdWgAwIBAgIRAJYomxY2cqHA/fbRnH5a/
˓→bwwCgYIKoZIzj0EAwIwczEL\nMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG\ncmF
˓→/7JFDHATJXtLgJhkK5KosDdHuKLYbCqvge\n46u3AC16MZyJRvKBiw6jTTBLMA4GA1UdDwEB/
˓→wQEAwIHgDAMBgNVHRMBAf8EAjAA\nMCsGA1UdIwQkMCKAIN7dJR9dimkFtkus0R5pAOlRz5SA3FB5t8Eaxl9A7lkgMAoG\
˓→\342\350\344\231G&
˓→\237\n\244\375\302J\220l\302\345\210\335D\250y\253P\0214:\221e\332@\002
˓→\000\254\361\224\247\210\214L\277\370\222\213\217\301\r\341v\227\265\277\336\256^
˓→\217\336\005y*\321\023\025\367" >
Here you can see that the invoke was submitted successfully based on the log message:

2018-02-22 16:34:27.107 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 00b Chaincode invoke successful. result: status:200
A successful response indicates that the transaction was submitted for ordering successfully. The transaction
will then be added to a block and, finally, validated or invalidated by each peer on the channel.
Here is an example of the peer chaincode list command:
• Using the --installed flag to list the chaincodes installed on the peer:

peer chaincode list --installed

Get installed chaincodes on peer:
Name: mycc, Version: 1.0, Path: github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02, Id: 8cc2730fdafd0b28ef734eac12b29df5fc98ad98bdb1b7e0ef96265c3d893d61

You can see that the peer has installed a chaincode called mycc which is at version 1.0.
• Using the --instantiated flag in combination with the -C (channel ID) flag to list the chaincodes instantiated
on a channel:
peer chaincode list --instantiated -C mychannel
You can see that chaincode mycc at version 1.0 is instantiated on channel mychannel.
Here is an example of the peer chaincode package command, which packages the chaincode named mycc
at version 1.1, creates the chaincode deployment spec, signs the package using the local MSP, and outputs it as
ccpack.out:
peer chaincode package ccpack.out -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -v 1.1 -s -S
.
.
.
2018-02-22 17:27:01.404 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
.
.
.
2018-02-22 17:27:01.879 UTC [chaincodeCmd] chaincodePackage -> DEBU 011 Packaged chaincode into deployment spec of size <3426>, with args = [ccpack.out]
Here is an example of the peer chaincode query command, which queries the peer ledger for the chaincode
named mycc at version 1.0 for the value of variable a:
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

Query Result: 90

You can see from the output that variable a had a value of 90 at the time of the query.
Here is an example of the peer chaincode signpackage command, which accepts an existing signed package
and creates a new one with signature of the local MSP appended to it.
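A sketch of the invocation, with illustrative file names (signpackage takes the existing package and the output
package as positional arguments; verify against peer chaincode signpackage --help):

peer chaincode signpackage ccpack.out signedccpack.out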
Here is an example of the peer chaincode upgrade command, which upgrades the chaincode named mycc at
version 1.0 on channel mychannel to version 1.1, which contains a new variable c:
• Using the --tls and --cafile global flags to upgrade the chaincode in a network with TLS enabled:
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

peer chaincode upgrade -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.1 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
.
.
.
2018-02-22 18:26:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-02-22 18:26:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
.
.
.
2018-02-22 18:26:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc...(binary payload elided)..." >
.
.
.
2018-02-22 18:26:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
2018-02-22 18:26:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
• Using only the command-specific options to upgrade the chaincode in a network with TLS disabled:

peer chaincode upgrade -o orderer.example.com:7050 -C mychannel -n mycc -v 1.1 -c '{"Args":["init","a","100","b","200","c","300"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"
.
.
.
2018-02-22 18:28:31.433 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2018-02-22 18:28:31.435 UTC [chaincodeCmd] upgrade -> DEBU 006 Get upgrade proposal for chaincode <name:"mycc" version:"1.1" >
.
.
.
2018-02-22 18:28:46.687 UTC [chaincodeCmd] upgrade -> DEBU 009 endorse upgrade proposal, get response <status:200 message:"OK" payload:"\n\004mycc\022\0031.1\032\004escc\"\004vscc...(binary payload elided)..." >
.
.
.
2018-02-22 18:28:46.693 UTC [chaincodeCmd] upgrade -> DEBU 00c Get Signed envelope
2018-02-22 18:28:46.693 UTC [chaincodeCmd] chaincodeUpgrade -> DEBU 00d Send signed envelope to orderer
The peer channel command allows administrators to perform channel related operations on a peer, such as joining
a channel or listing the channels to which a peer is joined.
9.3.1 Syntax
The peer channel command has the following subcommands:

• create
• fetch
• getinfo
• join
• list
• signconfigtx
• update
Usage:
peer channel [command]
Available Commands:
create Create a channel
fetch Fetch a block
getinfo get blockchain information of a specified channel.
join Joins the peer to a channel.
list List of channels peer has joined.
signconfigtx Signs a configtx update.
update Send a configtx update.
Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Use "peer channel [command] --help" for more information about a command.
Usage:
peer channel create [flags]
Flags:
-c, --channelID string In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
peer channel fetch <newest|oldest|config|(number)> [outputfile] [flags]
Flags:
-c, --channelID string In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
peer channel getinfo [flags]
Flags:
-c, --channelID string In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
peer channel join [flags]
Flags:
-b, --blockpath string Path to file containing genesis block
-h, --help help for join

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Usage:
peer channel list [flags]
Flags:
-h, --help help for list

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Signs the supplied configtx update file in place on the filesystem. Requires '-f'.
Usage:
peer channel signconfigtx [flags]
Flags:
-f, --file string Configuration transaction file generated by a tool such as configtxgen for submitting to orderer

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Signs and sends the supplied configtx update file to the channel. Requires '-f', '-o', '-c'.
Usage:
peer channel update [flags]
Flags:
-c, --channelID string In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*

Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
Here’s an example that uses the --orderer global flag on the peer channel create command.
• Create a sample channel mychannel defined by the configuration transaction contained in the file ./
createchannel.txn. Use the orderer at orderer.example.com:7050:

peer channel create -c mychannel -f ./createchannel.txn --orderer orderer.example.com:7050
2018-02-25 08:23:57.548 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-02-25 08:23:57.626 UTC [channelCmd] InitCmdFactory -> INFO 019 Endorser and orderer connections initialized
2018-02-25 08:23:57.834 UTC [channelCmd] readBlock -> INFO 020 Received block: 0
2018-02-25 08:23:57.835 UTC [main] main -> INFO 021 Exiting.....
Block 0 is returned indicating that the channel has been successfully created.
Here’s another example of the peer channel create command.
• Create a new channel mychannel for the network, using the orderer at orderer.example.com:7050. The
configuration transaction required to create this channel is defined in the file ./createchannel.txn.

ls -l
You can see that channel mychannel has been successfully created, as indicated in the output where block 0
(zero) is added to the blockchain for this channel and returned to the peer, where it is stored in the local directory
as mychannel.block.
Block zero is often called the genesis block as it provides the starting configuration for the channel. All sub-
sequent updates to the channel will be captured as configuration blocks on the channel’s blockchain, each of
which supersedes the previous configuration.
Here’s an example of the peer channel fetch command.
• Using the newest option to retrieve the most recent channel block and store it in the file mychannel.block:

peer channel fetch newest mychannel.block -c mychannel --orderer orderer.example.com:7050

2018-02-25 13:10:16.137 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-02-25 13:10:16.144 UTC [channelCmd] readBlock -> INFO 00a Received block: 32
2018-02-25 13:10:16.145 UTC [main] main -> INFO 00b Exiting.....
ls -l
You can see that the retrieved block is number 32, and that the information has been written to the file
mychannel.block.
• Using the (block number) option to retrieve a specific block – in this case, block number 16 – and store it
in the default block file:

peer channel fetch 16 -c mychannel --orderer orderer.example.com:7050

2018-02-25 13:46:50.296 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-02-25 13:46:50.302 UTC [channelCmd] readBlock -> INFO 00a Received block: 16
2018-02-25 13:46:50.302 UTC [main] main -> INFO 00b Exiting.....
ls -l
You can see that the retrieved block is number 16, and that the information has been written to the default file
mychannel_16.block.
For configuration blocks, the block file can be decoded using the configtxlator command; see that command's
documentation for an example of decoded output. User transaction blocks can also be decoded, but a user program must
be written to do this.
Here’s an example of the peer channel getinfo command, getting information about the blockchain for channel
mychannel:

peer channel getinfo -c mychannel

2018-02-25 15:15:44.135 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
Blockchain info: { ... "f8lZXoAn3gF86zrFq7L1DzW2aKuabH9Ow6SIE5Y04a4="}

You can see that the latest block for channel mychannel is block 5. You can also see the cryptographic hashes
for the most recent blocks in the channel’s blockchain.
Here’s an example of the peer channel join command.
• Join the peer to the channel defined in the genesis block identified by the path ./mychannel.genesis.block:

peer channel join -b ./mychannel.genesis.block

2018-02-25 12:25:26.511 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
You can see that the peer has successfully made a request to join the channel.
Here’s an example of the peer channel list command, listing the channels to which the peer is joined:

peer channel list

2018-02-25 14:21:20.361 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized

Here’s an example of the peer channel signconfigtx command, signing the channel update transaction
updatechannel.tx in place:

ls -l

peer channel signconfigtx -f updatechannel.tx

2018-02-25 18:16:44.456 GMT [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized

ls -l

You can see that the peer has successfully signed the configuration transaction by the increase in the size of the
file updatechannel.tx from 284 bytes to 2180 bytes.
Here’s an example of the peer channel update command, sending the signed configuration transaction to the channel:

peer channel update -c mychannel -f updatechannel.tx -o orderer.example.com:7050

2018-02-23 06:32:11.569 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
The peer version command displays the version information of the peer. It displays version, Commit SHA, Go
version, OS/architecture, and chaincode information. For example:
peer:
Version: 1.4.0
Commit SHA: 0efc897
Go version: go1.11.1
OS/Arch: linux/amd64
Chaincode:
Base Image Version: 0.4.14
Base Docker Namespace: hyperledger
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
9.4.1 Syntax
Usage:
peer version [flags]
Flags:
-h, --help help for version
The peer logging subcommand allows administrators to dynamically view and configure the log levels of a peer.
9.5.1 Syntax
Usage:
peer logging [command]
Available Commands:
getlevel Returns the logging level of the requested logger.
getlogspec Returns the active log spec.
revertlevels Reverts the logging spec to the peer's spec at startup.
setlevel Adds the logger and log level to the current logging spec.
setlogspec Sets the logging spec.
Flags:
-h, --help help for logging
Use "peer logging [command] --help" for more information about a command.
Returns the logging level of the requested logger. Note: the logger name should exactly match the name that is displayed in the logs.
Usage:
peer logging getlevel <logger> [flags]
Flags:
-h, --help help for getlevel
Usage:
peer logging revertlevels [flags]
Flags:
-h, --help help for revertlevels
Adds the logger and log level to the current logging specification.
Usage:
peer logging setlevel <logger> <log level> [flags]
Flags:
-h, --help help for setlevel
• To get the log level for the logger peer:

peer logging getlevel peer

2018-11-01 14:18:11.276 UTC [cli.logging] getLevel -> INFO 001 Current log level for logger 'peer': INFO
• To get the active logging spec for the peer:

peer logging getlogspec

2018-11-01 14:21:03.591 UTC [cli.logging] getLogSpec -> INFO 001 Current logging spec: info
• To set the log level to ERROR for only the logger that exactly matches the supplied name, append a period to the
logger name:

peer logging setlevel gossip. error

2018-11-01 14:27:33.080 UTC [cli.logging] setLevel -> INFO 001 Log level set for logger name/prefix 'gossip.': ERROR
• To set the active logging spec:

peer logging setlogspec gossip=warning:msp=warning:info

2018-11-01 14:32:12.871 UTC [cli.logging] setLogSpec -> INFO 001 Current logging spec set to: gossip=warning:msp=warning:info

Note: there is only one active logging spec. Any previous spec, including modules updated via 'setlevel', will
no longer be applicable.
• To revert the logging spec to the peer's spec at startup:

peer logging revertlevels

2018-11-01 14:37:12.402 UTC [cli.logging] revertLevels -> INFO 001 Logging spec reverted to the peer's spec at startup.
The peer node command allows an administrator to start a peer node or check the status of a peer node.
9.6.1 Syntax
Usage:
peer node start [flags]
Flags:
-h, --help help for start
-o, --orderer string Ordering service endpoint (default "orderer:7050")
--peer-chaincodedev Whether peer in chaincode development mode
Usage:
peer node status [flags]
Flags:
-h, --help help for status
The --peer-chaincodedev flag starts a peer node in chaincode development mode. Normally chaincode containers are
started and maintained by the peer. However, in chaincode development mode, chaincode is built and started by the user.
This mode is useful during the chaincode development phase for iterative development. See more information on
development mode in the chaincode tutorial.
9.7 configtxgen
The configtxgen command allows users to create and inspect channel config related artifacts. The content of the
generated artifacts is dictated by the contents of configtx.yaml.
9.7.1 Syntax
The configtxgen tool has no sub-commands, but supports flags which can be set to accomplish a number of tasks.
9.7.2 configtxgen
Usage of configtxgen:
-asOrg string
Performs the config generation as a particular organization (by name), only including values in the write set that org (likely) has privilege to set
-channelID string
The channel ID to use in the configtx
-configPath string
The path containing the configuration to use (if set)
-inspectBlock string
Prints the configuration contained in the block at the specified path
-inspectChannelCreateTx string
Prints the configuration contained in the transaction at the specified path
-outputAnchorPeersUpdate string
Creates an config update to update an anchor peer (works only with the default channel creation, and only for the first update)
-outputBlock string
The path to write the genesis block to (if set)
-outputCreateChannelTx string
The path to write a channel creation configtx to (if set)
-printOrg string
Prints the definition of an organization as JSON. (useful for adding an org to a channel manually)
-profile string
The profile from configtx.yaml to use for generation. (default "SampleInsecureSolo")
-version
Show version information
9.7.3 Usage

Print the contents of a genesis block named genesis_block.pb to the screen as JSON:

configtxgen -inspectBlock genesis_block.pb

Print the contents of a channel creation tx named create_chan_tx.pb to the screen as JSON:

configtxgen -inspectChannelCreateTx create_chan_tx.pb

Construct an organization definition based on the parameters such as MSPDir from configtx.yaml and print it as
JSON to the screen (this output is useful for channel reconfiguration workflows, such as adding a member):

configtxgen -printOrg Org1

Output a configuration update transaction to anchor_peer_tx.pb which sets the anchor peers for organization
Org1 as defined in profile SampleSingleMSPChannelV1_1 based on configtx.yaml:

configtxgen -profile SampleSingleMSPChannelV1_1 -outputAnchorPeersUpdate anchor_peer_tx.pb -asOrg Org1
9.7.4 Configuration
The configtxgen tool’s output is largely controlled by the content of configtx.yaml. This file is searched for
at FABRIC_CFG_PATH and must be present for configtxgen to operate.
This configuration file may be edited, or, individual properties may be overridden by setting environment variables,
such as CONFIGTX_ORDERER_ORDERERTYPE=kafka.
For many configtxgen operations, a profile name must be supplied. Profiles are a way to express multi-
ple similar configurations in a single file. For instance, one profile might define a channel with 3 orgs, and an-
other might define one with 4 orgs. To accomplish this without the length of the file becoming burdensome,
configtx.yaml depends on the standard YAML feature of anchors and references. Base parts of the configu-
ration are tagged with an anchor like &OrdererDefaults and then merged into a profile with a reference like <<:
*OrdererDefaults. Note, when configtxgen is operating under a profile, environment variable overrides do
not need to include the profile prefix and may be referenced relative to the root element of the profile. For instance,
do not specify CONFIGTX_PROFILE_SAMPLEINSECURESOLO_ORDERER_ORDERERTYPE, instead simply omit
the profile specifics and use the CONFIGTX prefix followed by the elements relative to the profile name such as
CONFIGTX_ORDERER_ORDERERTYPE.
Refer to the sample configtx.yaml shipped with Fabric for all possible configuration options. You may find this
file in the config directory of the release artifacts tar, or you may find it under the sampleconfig folder if you
are building from source.
9.8 configtxlator
The configtxlator command allows users to translate between protobuf and JSON versions of Fabric data
structures and create config updates. The command may either start a REST server to expose its functions over HTTP
or may be utilized directly as a command line tool.
9.8.1 Syntax
The configtxlator command has five subcommands: start (which starts the REST server), proto_encode,
proto_decode, compute_update, and version.

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
The proto_encode subcommand converts a JSON document into a serialized protobuf message.

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--type=TYPE The type of protobuf structure to encode to. For example, 'common.Config'.
--input=/dev/stdin A file containing the JSON document.
--output=/dev/stdout A file to write the output to.
The proto_decode subcommand converts a serialized protobuf message into a JSON document.

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--type=TYPE The type of protobuf structure to decode from. For example, 'common.Config'.
--input=/dev/stdin A file containing the proto message.
--output=/dev/stdout A file to write the JSON document to.
The compute_update subcommand takes two marshaled common.Config messages and computes the config update
which transitions between the two.

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--original=ORIGINAL The original config message.
--updated=UPDATED The updated config message.
--channel_id=CHANNEL_ID The name of the channel for this update.
--output=/dev/stdout A file to write the JSON document to.
The version subcommand shows the configtxlator version.

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
9.8.7 Examples

Decoding

Decode a block named fabric_block.pb to JSON and print it to stdout:

configtxlator proto_decode --input fabric_block.pb --type common.Block

Alternatively, after starting the REST server, the following curl command performs the same operation through the
REST API:

curl -X POST --data-binary @fabric_block.pb "${CONFIGTXLATOR_URL}/protolator/decode/common.Block"

Encoding

Convert a JSON document for a policy from stdin to a file named policy.pb:

configtxlator proto_encode --type common.Policy --output policy.pb

Alternatively, after starting the REST server, the following curl command performs the same operation through the
REST API:

curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/encode/common.Policy" > policy.pb

Pipelines

Compute a config update from original_config.pb and modified_config.pb and decode it to JSON to
stdout:

configtxlator compute_update --channel_id testchan --original original_config.pb --updated modified_config.pb | configtxlator proto_decode --type common.ConfigUpdate

Alternatively, after starting the REST server, the following curl commands perform the same operations through the
REST API:

curl -X POST -F channel=testchan -F original=@original_config.pb -F updated=@modified_config.pb "${CONFIGTXLATOR_URL}/configtxlator/compute/update-from-configs" | curl -X POST --data-binary @/dev/stdin "${CONFIGTXLATOR_URL}/protolator/encode/common.ConfigUpdate"
The tool name is a portmanteau of configtx and translator and is intended to convey that the tool simply converts
between different equivalent data representations. It does not generate configuration. It does not submit or retrieve
configuration. It does not modify configuration itself, it simply provides some bijective operations between different
views of the configtx format.
There is no configuration file for configtxlator, nor are there any authentication or authorization facilities included
for the REST server. Because configtxlator does not have any access to data, key material, or other information which
might be considered sensitive, there is no risk to the owner of the server in exposing it to other clients. However,
because the data sent by a user to the REST server might be confidential, the user should either trust the administrator
of the server, run a local instance, or operate via the CLI.
9.9 cryptogen
cryptogen is a utility for generating Hyperledger Fabric key material. It is provided as a means of preconfiguring
a network for testing purposes. It would normally not be used in the operation of a production network.
9.9.1 Syntax
Flags:
--help Show context-sensitive help (also try --help-long and --help-man).

Commands:
help [<command>...]
Show help.
generate [<flags>]
Generate key material
showtemplate
Show the default configuration template
version
Show version information
extend [<flags>]
Extend existing network
The cryptogen generate command has the following flags:

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--output="crypto-config" The output directory in which to place artifacts
--config=CONFIG The configuration template to use
The cryptogen showtemplate command has the following flags:

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
The cryptogen extend command has the following flags:

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--input="crypto-config" The input directory in which existing network place
--config=CONFIG The configuration template to use
The cryptogen version command has the following flags:

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
9.9.7 Usage

Here’s an example using the different available flags on the cryptogen extend command:

cryptogen extend --input="crypto-config" --config=config.yaml

where config.yaml adds a new peer organization called org3.example.com.
The discovery service has its own Command Line Interface (CLI) which uses a YAML configuration file to persist
properties such as certificate and private key paths, as well as MSP ID.
The discover command has the following subcommands:
• saveConfig
• peers
• config
• endorsers
And the usage of the command is shown below:
Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--tlsCert=TLSCERT (Optional) Sets the client TLS certificate file path that is used when the peer enforces client authentication
--tlsKey=TLSKEY (Optional) Sets the client TLS key file path that is used when the peer enforces client authentication
--userKey=USERKEY Sets the user's key file path that is used to sign messages sent to the peer
--MSP=MSP Sets the MSP ID of the user, which represents the CA(s) that issued its user certificate
Commands:
help [<command>...]
Show help.
peers [<flags>]
Discover peers
config [<flags>]
Discover channel config
endorsers [<flags>]
Discover chaincode endorsers
saveConfig
Save the config passed by flags into the file specified by --configFile
Currently, for peers to be seen by service discovery, they need to have an external endpoint configured.
Otherwise, Fabric assumes the peer should not be disclosed.
To define these endpoints, you need to specify them in the core.yaml of the peer, replacing the sample endpoint
below with the one of your peer.
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
To persist the configuration, a config file name should be supplied via the flag --configFile, along with the
command saveConfig:
$ cat conf.yaml
version: 0
tlsconfig:
  certpath: ""
  keypath: ""
  peercacertpath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/tls/ca.crt
  timeout: 0s
signerconfig:
  mspid: Org1MSP
  identitypath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem
  keypath: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/ea4f6a38ac7057b6fa9502c2f5f39f182e320f71f667749100fe7dd94c23ce43_sk
When the peer runs with TLS enabled, the discovery service on the peer requires the client to connect to it with mutual
TLS, which means it needs to supply a TLS certificate. The peer is configured by default to request (but not to verify)
client TLS certificates, so supplying a TLS certificate isn’t needed (unless the peer’s tls.clientAuthRequired
is set to true).
When the discovery CLI’s config file has a certificate path for peercacertpath, but the certpath and keypath
aren’t configured as above, the discovery CLI generates a self-signed TLS certificate and uses it to connect
to the peer.
When the peercacertpath isn’t configured, the discovery CLI connects without TLS; this is highly discouraged,
as the information is then sent over plaintext, unencrypted.
The discovery CLI acts as a discovery client, and it needs to be executed against a peer. This is done by specifying the
--server flag. In addition, the queries are channel-scoped, so the --channel flag must be used.
The only query that doesn’t require a channel is the local membership peer query, which by default can only be used
by administrators of the peer being queried.
The discover CLI supports all server-side queries:
• Peer membership query
• Configuration query
• Endorsers query
Let’s go over them and see how they should be invoked and parsed:

$ discover --configFile conf.yaml peers --channel mychannel --server peer0.org1.example.com:7051

[
    {
        "MSPID": "Org2MSP",
        "LedgerHeight": 5,
        "Endpoint": "peer0.org2.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...(base64-encoded certificate elided)...\n-----END CERTIFICATE-----\n",
        "Chaincodes": [
            "mycc"
        ]
    },
    {
        "MSPID": "Org2MSP",
        "LedgerHeight": 5,
        "Endpoint": "peer1.org2.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
        "Chaincodes": [
            "mycc"
        ]
    },
    {
        "MSPID": "Org1MSP",
        ...
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
        "Chaincodes": [
            "mycc"
        ]
    },
    {
        "MSPID": "Org1MSP",
        "LedgerHeight": 5,
        "Endpoint": "peer1.org1.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
        "Chaincodes": null
    }
]
As seen, this command outputs a JSON array containing membership information about all the peers in the channel,
as known to the peer that was queried.
The Identity that is returned is the enrollment certificate of the peer, and it can be parsed with a combination of
jq and openssl:
$ discover --configFile conf.yaml peers --channel mychannel --server peer0.org1.example.com:7051 | jq .[0].Identity | sed "s/\\\n/\n/g" | sed "s/\"//g" | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            55:e9:3f:97:94:d5:74:db:e2:d6:99:3c:01:24:be:bf
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=San Francisco, O=org2.example.com, CN=ca.org2.example.com
        Validity
            Not Before: Jun 9 11:58:28 2018 GMT
            Not After : Jun 6 11:58:28 2028 GMT
        Subject: C=US, ST=California, L=San Francisco, OU=peer, CN=peer0.org2.example.com
        ...
            keyid:8E:58:82:C9:0A:11:10:A9:0B:93:03:EE:A0:54:42:F4:A3:EF:11:4C:82:B6:F9:CE:10:A2:1E:24:AB:13:82:
The configuration query returns a mapping from MSP IDs to orderer endpoints, as well as the FabricMSPConfig,
which can be used by the SDK to verify all peer and orderer nodes:
$ discover --configFile conf.yaml config --channel mychannel --server peer0.org1.
˓→example.com:7051
{
    "msps": {
        "OrdererOrg": {
            "name": "OrdererMSP",
            "root_certs": [
                "...(base64-encoded certificate elided)..."
            ],
            "admins": [
                "..."
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "..."
            ]
        },
        "Org1MSP": {
            "name": "Org1MSP",
            "root_certs": [
                "..."
            ],
            "admins": [
                "..."
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "..."
            ],
            "fabric_node_ous": {
                "enable": true,
                "client_ou_identifier": {
                    "certificate": "...",
                    "organizational_unit_identifier": "client"
                },
                "peer_ou_identifier": {
                    "certificate": "...",
                    "organizational_unit_identifier": "peer"
                }
            }
        },
        "Org2MSP": {
            "name": "Org2MSP",
            "root_certs": [
                "..."
            ],
            "admins": [
                "..."
            ],
            "crypto_config": {
                "signature_hash_family": "SHA2",
                "identity_identifier_hash_function": "SHA256"
            },
            "tls_root_certs": [
                "..."
            ],
            "fabric_node_ous": {
                "enable": true,
                "client_ou_identifier": {
                    "certificate": "...",
                    "organizational_unit_identifier": "client"
                },
                "peer_ou_identifier": {
                    "certificate": "...",
                    "organizational_unit_identifier": "peer"
                }
            }
        },
        "Org3MSP": {
            "name": "Org3MSP",
            "root_certs": [
                "...(idemix credential material elided)..."
            ],
            "intermediate_certs": [
                "..."
            ],
            "admins": [
                "..."
            ]
        }
    },
    "orderers": {
        "OrdererOrg": {
            "endpoint": [
                {
                    "host": "orderer.example.com",
                    "port": 7050
                }
            ]
        }
    }
}
It’s important to note that the certificates here are base64 encoded, and thus should be decoded in a manner similar to
the following:

$ echo <base64-encoded certificate> | base64 --decode | openssl x509 -text -noout

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            c8:99:2d:3a:2d:7f:4b:73:53:8b:39:18:7b:c3:e1:1e
    Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=San Francisco, O=example.com, CN=ca.example.com
        Validity
            Not Before: Jun 9 11:58:28 2018 GMT
            Not After : Jun 6 11:58:28 2028 GMT
        Subject: C=US, ST=California, L=San Francisco, O=example.com, CN=ca.example.com
        ...
            60:9D:F2:30:26:CE:8F:65:81:41:AD:96:15:0E:24:8D:A0:9D:C5:79:C1:17:BF:FE:E5:1B:FB:75:50:10:A6:4C
    Signature Algorithm: ecdsa-with-SHA256
         30:44:02:20:3d:e1:a7:6c:99:3f:87:2a:36:44:51:98:37:11:
         d8:a0:47:7a:33:ff:30:c1:09:a6:05:ec:b0:53:53:39:c1:0e:
         02:20:6b:f4:1d:48:e0:72:e4:c2:ef:b0:84:79:d4:2e:c2:c5:
         1b:6f:e4:2f:56:35:51:18:7d:93:51:86:05:84:ce:1f
To query for the endorsers of a chaincode call, additional flags need to be supplied:
• The --chaincode flag is mandatory and it provides the chaincode name(s). To query for a chaincode-to-
chaincode invocation, one needs to repeat the --chaincode flag with all the chaincodes.
• The --collection flag is used to specify private data collections that are expected to be used by the chaincode(s).
To map from the chaincodes passed via --chaincode to the collections, the following syntax should be used:
collection=CC:Collection1,Collection2,...
For example, to query for a chaincode invocation that results in both cc1 and cc2 to be invoked, as well as
writes to private data collection col1 by cc2, one needs to specify: --chaincode=cc1 --chaincode=cc2
--collection=cc2:col1
Below is the output of an endorsers query for chaincode mycc when the endorsement policy is AND('Org1.peer',
'Org2.peer'):

$ discover --configFile conf.yaml endorsers --channel mychannel --server peer0.org1.example.com:7051 --chaincode mycc
[
    {
        "Chaincode": "mycc",
        "EndorsersByGroups": {
            "G0": [
                {
                    "MSPID": "Org1MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer0.org1.example.com:7051",
                    "Identity": "-----BEGIN CERTIFICATE-----\n...(base64-encoded certificate elided)...\n-----END CERTIFICATE-----\n"
                }
            ],
            "G1": [
                {
                    "MSPID": "Org2MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer1.org2.example.com:7051",
                    "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
                },
                {
                    "MSPID": "Org2MSP",
                    "LedgerHeight": 5,
                    "Endpoint": "peer0.org2.example.com:7051",
                    "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
                }
            ]
        },
        "Layouts": [
            {
                "quantities_by_group": {
                    "G0": 1,
                    "G1": 1
                }
            }
        ]
    }
]
It is possible to execute the discovery CLI without having a configuration file, and just passing all needed configuration
as commandline flags. The following is an example of a local peer membership query which loads administrator
credentials:
$ discover --peerTLSCA tls/ca.crt --userKey msp/keystore/
˓→cf31339d09e8311ac9ca5ed4e27a104a7f82f1e5904b3296a170ba4725ffde0d_sk --userCert msp/
[
    {
        "MSPID": "Org1MSP",
        "Endpoint": "peer1.org1.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...(base64-encoded certificate elided)...\n-----END CERTIFICATE-----\n"
    },
    {
        "MSPID": "Org1MSP",
        "Endpoint": "peer0.org1.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
    },
    {
        "MSPID": "Org2MSP",
        "Endpoint": "peer0.org2.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
    },
    {
        "MSPID": "Org2MSP",
        "Endpoint": "peer1.org2.example.com:7051",
        "Identity": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
    }
]
The Hyperledger Fabric CA is a Certificate Authority (CA) for Hyperledger Fabric. The commands available for the
fabric-ca client and fabric-ca server are described in the links below.
The fabric-ca-client command allows you to manage identities (including attribute management) and certificates (in-
cluding renewal and revocation).
More information on fabric-ca-client commands can be found here.
The fabric-ca-server command allows you to initialize and start a server process which may host one or more certificate
authorities.
More information on fabric-ca-server commands can be found here.
Architecture Reference
Note: This document represents the initial architectural proposal for Hyperledger Fabric v1.0. While the Hyperledger
Fabric implementation has conceptually followed from the architectural proposal, some details have been altered dur-
ing the implementation. The initial architectural proposal is presented as originally prepared. For a more technically
accurate representation of the architecture, please see Hyperledger Fabric: A Distributed Operating System for Per-
missioned Blockchains.
The blockchain is a distributed system consisting of many nodes that communicate with each other. The blockchain
runs programs called chaincode, holds state and ledger data, and executes transactions. The chaincode is the central
element as transactions are operations invoked on the chaincode. Transactions have to be “endorsed” and only endorsed
transactions may be committed and have an effect on the state. There may exist one or more special chaincodes for
management functions and parameters, collectively called system chaincodes.
1.1. Transactions

Transactions may be of two types:

• Deploy transactions create new chaincode and take a program as a parameter. When a deploy transaction executes
successfully, the chaincode has been installed “on” the blockchain.

• Invoke transactions perform an operation in the context of previously deployed chaincode. An invoke transaction
refers to a chaincode and one of its provided functions. When successful, the chaincode executes the specified
function, which may involve modifying the corresponding state and returning an output.

1.2.1. State
The latest state of the blockchain (or, simply, state) is modeled as a versioned key-value store (KVS), where keys are
names and values are arbitrary blobs. These entries are manipulated by the chaincodes (applications) running on the
blockchain through put and get KVS-operations. The state is stored persistently and updates to the state are logged.
Notice that the versioned KVS is adopted as the state model; an implementation may use an actual KVS, but also an
RDBMS or any other solution.
More formally, state s is modeled as an element of a mapping K -> (V X N), where:
• K is a set of keys
• V is a set of values
• N is an infinite ordered set of version numbers. Injective function next: N -> N takes an element of N and
returns the next version number.
Both V and N contain a special element ⊥ (empty type), which is in case of N the lowest element. Initially all keys are
mapped to (⊥, ⊥). For s(k)=(v,ver) we denote v by s(k).value, and ver by s(k).version.
KVS operations are modeled as follows:
• put(k,v), for k ∈ K and v ∈ V, takes the blockchain state s and changes it to s' such that s'(k)=(v,
next(s(k).version)) with s'(k')=s(k') for all k'!=k.
• get(k) returns s(k).
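Since this notation is abstract, a short sketch may help. The following Go snippet is illustrative only: the type and
method names are invented for this document, and it is not Fabric's actual state database (which is backed by LevelDB
or CouchDB behind a richer interface). It models exactly the put and get semantics defined above:

package main

import "fmt"

// Versioned pairs a value with its version, as in s(k) = (value, version).
type Versioned struct {
    Value   []byte
    Version uint64 // the zero value plays the role of the bottom element ⊥
}

// State models the versioned key-value store s: K -> (V x N).
type State map[string]Versioned

// Put implements put(k,v): s'(k) = (v, next(s(k).version)).
func (s State) Put(k string, v []byte) {
    prev := s[k] // a missing key yields the bottom element (version 0)
    s[k] = Versioned{Value: v, Version: prev.Version + 1}
}

// Get implements get(k): it returns s(k).
func (s State) Get(k string) Versioned { return s[k] }

func main() {
    s := State{}
    s.Put("a", []byte("100"))
    s.Put("a", []byte("90"))
    fmt.Printf("a = %s at version %d\n", s.Get("a").Value, s.Get("a").Version) // a = 90 at version 2
}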
1.2.2 Ledger
Ledger provides a verifiable history of all successful state changes (we talk about valid transactions) and unsuccessful
attempts to change state (we talk about invalid transactions), occurring during the operation of the system.
Ledger is constructed by the ordering service (see Sec 1.3.3) as a totally ordered hashchain of blocks of (valid or
invalid) transactions. The hashchain imposes the total order of blocks in a ledger and each block contains an array of
totally ordered transactions. This imposes total order across all transactions.
Ledger is kept at all peers and, optionally, at a subset of orderers. In the context of an orderer we refer to the ledger
as the OrdererLedger, whereas in the context of a peer we refer to the ledger as the PeerLedger. PeerLedger
differs from the OrdererLedger in that peers locally maintain a bitmask that tells apart valid transactions from
invalid ones (see Section XX for more details).
Peers may prune PeerLedger as described in Section XX (post-v1 feature). Orderers maintain OrdererLedger
for fault-tolerance and availability (of the PeerLedger) and may decide to prune it at anytime, provided that prop-
erties of the ordering service (see Sec. 1.3.3) are maintained.
The ledger allows peers to replay the history of all transactions and to reconstruct the state. Therefore, state as
described in Sec 1.2.1 is an optional data structure.
1.3. Nodes
Nodes are the communication entities of the blockchain. A “node” is only a logical function in the sense that multiple
nodes of different types can run on the same physical server. What counts is how nodes are grouped in “trust domains”
and associated to logical entities that control them.
There are three types of nodes:
1. Client or submitting-client: a client that submits an actual transaction-invocation to the endorsers, and broad-
casts transaction-proposals to the ordering service.
2. Peer: a node that commits transactions and maintains the state and a copy of the ledger (see Sec. 1.2). Besides,
peers can have a special endorser role.
3. Ordering-service-node or orderer: a node running the communication service that implements a delivery
guarantee, such as atomic or total order broadcast.
The types of nodes are explained next in more detail.
1.3.1. Client
The client represents the entity that acts on behalf of an end-user. It must connect to a peer for communicating with
the blockchain. The client may connect to any peer of its choice. Clients create and thereby invoke transactions.
As detailed in Section 2, clients communicate with both peers and the ordering service.
1.3.2. Peer
A peer receives ordered state updates in the form of blocks from the ordering service and maintains the state and the
ledger.
Peers can additionally take up a special role of an endorsing peer, or an endorser. The special function of an endors-
ing peer occurs with respect to a particular chaincode and consists in endorsing a transaction before it is committed.
Every chaincode may specify an endorsement policy that may refer to a set of endorsing peers. The policy defines the
necessary and sufficient conditions for a valid transaction endorsement (typically a set of endorsers’ signatures), as de-
scribed later in Sections 2 and 3. In the special case of deploy transactions that install new chaincode the (deployment)
endorsement policy is specified as an endorsement policy of the system chaincode.
1.3.3. Ordering service nodes (Orderers)

The orderers form the ordering service, i.e., a communication fabric that provides delivery guarantees. The ordering
service can be implemented in different ways: ranging from a centralized service (used e.g., in development and
testing) to distributed protocols that target different network and node fault models.
Ordering service provides a shared communication channel to clients and peers, offering a broadcast service for mes-
sages containing transactions. Clients connect to the channel and may broadcast messages on the channel which are
then delivered to all peers. The channel supports atomic delivery of all messages, that is, message communication with
total-order delivery and (implementation specific) reliability. In other words, the channel outputs the same messages
to all connected peers and outputs them to all peers in the same logical order. This atomic communication guaran-
tee is also called total-order broadcast, atomic broadcast, or consensus in the context of distributed systems. The
communicated messages are the candidate transactions for inclusion in the blockchain state.
Partitioning (ordering service channels). Ordering service may support multiple channels similar to the topics of
a publish/subscribe (pub/sub) messaging system. Clients can connect to a given channel and can then send messages
and obtain the messages that arrive. Channels can be thought of as partitions - clients connecting to one channel are
unaware of the existence of other channels, but clients may connect to multiple channels. Even though some ordering
service implementations included with Hyperledger Fabric support multiple channels, for simplicity of presentation,
in the rest of this document, we assume ordering service consists of a single channel/topic.
Ordering service API. Peers connect to the channel provided by the ordering service, via the interface provided by
the ordering service. The ordering service API consists of two basic operations (more generally asynchronous events):
TODO add the part of the API for fetching particular blocks under client/peer specified sequence numbers.
• broadcast(blob): a client calls this to broadcast an arbitrary message blob for dissemination over the
channel. This is also called request(blob) in the BFT context, when sending a request to a service.
• deliver(seqno, prevhash, blob): the ordering service calls this on the peer to deliver the message
blob with the specified non-negative integer sequence number (seqno) and hash of the most recently deliv-
ered blob (prevhash). In other words, it is an output event from the ordering service. deliver() is also
sometimes called notify() in pub-sub systems or commit() in BFT systems.
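As an illustration of these two operations, here is a minimal Go interface sketch; the type and method names are
invented for this document's notation and do not correspond to Fabric's actual gRPC AtomicBroadcast service:

package ordering

// DeliverEvent corresponds to deliver(seqno, prevhash, blob).
type DeliverEvent struct {
    Seqno    uint64
    PrevHash []byte
    Blob     []byte
}

// Channel models the single ordering channel assumed in this document.
type Channel interface {
    // Broadcast submits an arbitrary blob for dissemination over the channel
    // (request(blob) in the BFT context).
    Broadcast(blob []byte) error
    // Deliver exposes the totally ordered stream of output events.
    Deliver() <-chan DeliverEvent
}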
Ledger and block formation. The ledger (see also Sec. 1.2.2) contains all data output by the ordering service. In a
nutshell, it is a sequence of deliver(seqno, prevhash, blob) events, which form a hash chain according
to the computation of prevhash described before.
Most of the time, for efficiency reasons, instead of outputting individual transactions (blobs), the ordering service will
group (batch) the blobs and output blocks within a single deliver event. In this case, the ordering service must
impose and convey a deterministic ordering of the blobs within each block. The number of blobs in a block may be
chosen dynamically by an ordering service implementation.
In the following, for ease of presentation, we define ordering service properties (rest of this subsection) and explain the
workflow of transaction endorsement (Section 2) assuming one blob per deliver event. These are easily extended
to blocks, assuming that a deliver event for a block corresponds to a sequence of individual deliver events for
each blob within a block, according to the above mentioned deterministic ordering of blobs within a block.
Ordering service properties
The guarantees of the ordering service (or atomic-broadcast channel) stipulate what happens to a broadcasted message
and what relations exist among delivered messages. These guarantees are as follows:
1. Safety (consistency guarantees): As long as peers are connected for sufficiently long periods of time to the
channel (they can disconnect or crash, but will restart and reconnect), they will see an identical series of delivered
(seqno, prevhash, blob) messages. This means the outputs (deliver() events) occur in the same
order on all peers and according to sequence number and carry identical content (blob and prevhash) for the
same sequence number. Note this is only a logical order, and a deliver(seqno, prevhash, blob) on
one peer is not required to occur in any real-time relation to deliver(seqno, prevhash, blob) that
outputs the same message at another peer. Put differently, given a particular seqno, no two correct peers deliver
different prevhash or blob values. Moreover, no value blob is delivered unless some client (peer) actually
called broadcast(blob) and, preferably, every broadcasted blob is only delivered once.
Furthermore, the deliver() event contains the cryptographic hash of the data in the previous deliver()
event (prevhash). When the ordering service implements atomic broadcast guarantees, prevhash is the
cryptographic hash of the parameters from the deliver() event with sequence number seqno-1. This
establishes a hash chain across deliver() events, which is used to help verify the integrity of the order-
ing service output, as discussed in Sections 4 and 5 later. In the special case of the first deliver() event,
prevhash has a default value.
2. Liveness (delivery guarantee): Liveness guarantees of the ordering service are specified by an ordering service
implementation. The exact guarantees may depend on the network and node fault model.
In principle, if the submitting client does not fail, the ordering service should guarantee that every correct peer
that connects to the ordering service eventually delivers every submitted transaction.
To summarize, the ordering service ensures the following properties:
• Agreement. For any two events at correct peers deliver(seqno, prevhash0, blob0) and
deliver(seqno, prevhash1, blob1) with the same seqno, prevhash0==prevhash1 and
blob0==blob1;
• Hashchain integrity. For any two events at correct peers deliver(seqno-1, prevhash0, blob0) and
deliver(seqno, prevhash, blob), prevhash = HASH(seqno-1||prevhash0||blob0).
• No skipping. If an ordering service outputs deliver(seqno, prevhash, blob) at a correct peer p,
such that seqno>0, then p already delivered an event deliver(seqno-1, prevhash0, blob0).
• No creation. Any event deliver(seqno, prevhash, blob) at a correct peer must be preceded by a
broadcast(blob) event at some (possibly distinct) peer;
• No duplication (optional, yet desirable). For any two events broadcast(blob) and broadcast(blob'),
when two events deliver(seqno0, prevhash0, blob) and deliver(seqno1,
prevhash1, blob') occur at correct peers and blob == blob', then seqno0==seqno1 and
prevhash0==prevhash1.
• Liveness. If a correct client invokes an event broadcast(blob) then every correct peer “eventually” issues
an event deliver(*, *, blob), where * denotes an arbitrary value.
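The hashchain-integrity and no-skipping properties above can be made concrete with a short sketch. The following Go
snippet is illustrative only: it assumes SHA-256 as HASH and an ad-hoc serialization of seqno||prevhash||blob,
whereas real blocks are hashed as structured protobuf messages:

package main

import (
    "bytes"
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// DeliverEvent corresponds to deliver(seqno, prevhash, blob).
type DeliverEvent struct {
    Seqno    uint64
    PrevHash []byte
    Blob     []byte
}

// chainHash computes HASH(seqno || prevhash || blob) for a delivered event.
func chainHash(e DeliverEvent) []byte {
    h := sha256.New()
    binary.Write(h, binary.BigEndian, e.Seqno) // error ignored: hash writes cannot fail
    h.Write(e.PrevHash)
    h.Write(e.Blob)
    return h.Sum(nil)
}

// verify checks "no skipping" and "hashchain integrity" for ev following prev.
func verify(prev, ev DeliverEvent) error {
    if ev.Seqno != prev.Seqno+1 {
        return fmt.Errorf("no-skipping violated: seqno %d after %d", ev.Seqno, prev.Seqno)
    }
    if !bytes.Equal(ev.PrevHash, chainHash(prev)) {
        return fmt.Errorf("hashchain integrity violated at seqno %d", ev.Seqno)
    }
    return nil
}

func main() {
    b0 := DeliverEvent{Seqno: 0, Blob: []byte("genesis")}
    b1 := DeliverEvent{Seqno: 1, PrevHash: chainHash(b0), Blob: []byte("batch-1")}
    fmt.Println(verify(b0, b1)) // <nil>: b1 correctly extends b0
}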
2. Basic workflow of transaction endorsement

In the following we outline the high-level request flow for a transaction.

2.1. The client creates a transaction and sends it to endorsing peers of its choice
To invoke a transaction, the client sends a PROPOSE message to a set of endorsing peers of its choice (possibly not
at the same time - see Sections 2.1.2. and 2.3.). The set of endorsing peers for a given chaincodeID is made
available to client via peer, which in turn knows the set of endorsing peers from endorsement policy (see Section 3).
For example, the transaction could be sent to all endorsers of a given chaincodeID. That said, some endorsers
could be offline, others may object and choose not to endorse the transaction. The submitting client tries to satisfy the
policy expression with the endorsers available.
In the following, we first detail PROPOSE message format and then discuss possible patterns of interaction between
submitting client and endorsers.
The format of a PROPOSE message is <PROPOSE,tx,[anchor]>, where tx is a mandatory and anchor optional
argument explained in the following.
• tx=<clientID,chaincodeID,txPayload,timestamp,clientSig>, where
– clientID is an ID of the submitting client,
– chaincodeID refers to the chaincode to which the transaction pertains,
– txPayload is the payload containing the submitted transaction itself,
– timestamp is a monotonically increasing (for every new transaction) integer maintained by the client,
– clientSig is signature of a client on other fields of tx.
The details of txPayload will differ between invoke transactions and deploy transactions (i.e., invoke transac-
tions referring to a deploy-specific system chaincode). For an invoke transaction, txPayload would consist
of two fields
– txPayload = <operation, metadata>, where
* operation denotes the chaincode operation (function) and arguments,
* metadata denotes attributes related to the invocation.
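For illustration, the PROPOSE message and tx fields above can be transcribed into Go types; these sketch types are
invented for this document's notation and are not Fabric's actual protobuf messages:

package proposal

// Tx mirrors tx = <clientID, chaincodeID, txPayload, timestamp, clientSig>.
type Tx struct {
    ClientID    string // ID of the submitting client
    ChaincodeID string // chaincode to which the transaction pertains
    TxPayload   []byte // the submitted transaction itself
    Timestamp   uint64 // monotonically increasing per client
    ClientSig   []byte // client signature over the other fields of tx
}

// Anchor carries the optional read-version dependencies: an endorser
// simulates the transaction only if its local versions for these keys match.
type Anchor map[string]uint64

// Propose mirrors <PROPOSE, tx, [anchor]>.
type Propose struct {
    Tx     Tx
    Anchor Anchor // nil when the optional anchor argument is omitted
}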
The client decides on the sequence of interaction with endorsers. For example, a client would typically send
<PROPOSE, tx> (i.e., without the anchor argument) to a single endorser, which would then produce the ver-
sion dependencies (anchor) which the client can later on use as an argument of its PROPOSE message to other
endorsers. As another example, the client could directly send <PROPOSE, tx> (without anchor) to all endorsers
of its choice. Different patterns of communication are possible and client is free to decide on those (see also Section
2.3.).
2.2. The endorsing peer simulates a transaction and produces an endorsement signature
On reception of a <PROPOSE,tx,[anchor]> message from a client, the endorsing peer epID first verifies the
client’s signature clientSig and then simulates the transaction. If the client specifies anchor, then the endorsing
peer simulates the transaction only if the read version numbers (i.e., readset as defined below) of the corresponding
keys in its local KVS match the version numbers specified by anchor.
Simulating a transaction involves the endorsing peer tentatively executing a transaction (txPayload), by invoking the
chaincode to which the transaction refers (chaincodeID) on the copy of the state that the endorsing peer locally
holds.
As a result of the execution, the endorsing peer computes read version dependencies (readset) and state updates
(writeset), also called MVCC+postimage info in DB language.
Recall that the state consists of key-value pairs. All key-value entries are versioned; that is, every entry contains
ordered version information, which is incremented each time the value stored under a key is updated. The peer that
interprets the transaction records all key-value pairs accessed by the chaincode, either for reading or for writing, but
the peer does not yet update its state. More specifically:
• Given state s before an endorsing peer executes a transaction, for every key k read by the transaction, pair
(k,s(k).version) is added to readset.
• Additionally, for every key k modified by the transaction to the new value v', pair (k,v') is added to
writeset. Alternatively, v' could be the delta of the new value to previous value (s(k).value).
If a client specifies anchor in the PROPOSE message, then the client-specified anchor must equal the readset
produced by the endorsing peer when simulating the transaction.
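A minimal sketch of this bookkeeping, in Go (illustrative only; the names are invented and Fabric's actual transaction
simulator is considerably richer), records a readset entry on every get and a writeset entry on every put while leaving
the committed state untouched:

package simulation

// VersionedValue is a (value, version) state entry as in Section 1.2.1.
type VersionedValue struct {
    Value   []byte
    Version uint64
}

// Simulator records a readset and writeset while a chaincode executes
// against a read-only view of the committed state.
type Simulator struct {
    state    map[string]VersionedValue // committed state, never mutated here
    Readset  map[string]uint64         // (k, s(k).version) pairs
    Writeset map[string][]byte         // (k, v') pairs
}

func New(state map[string]VersionedValue) *Simulator {
    return &Simulator{
        state:    state,
        Readset:  map[string]uint64{},
        Writeset: map[string][]byte{},
    }
}

// Get records the version of every key the chaincode reads.
func (s *Simulator) Get(k string) []byte {
    entry := s.state[k]
    s.Readset[k] = entry.Version
    return entry.Value
}

// Put records the tentative new value; the committed state is not updated.
func (s *Simulator) Put(k string, v []byte) {
    s.Writeset[k] = v
}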
Then, the peer forwards internally tran-proposal (and possibly tx) to the part of its (peer’s) logic that endorses
a transaction, referred to as endorsing logic. By default, endorsing logic at a peer accepts the tran-proposal and
simply signs the tran-proposal. However, endorsing logic may interpret arbitrary functionality, to, e.g., interact
with legacy systems with tran-proposal and tx as inputs to reach the decision whether to endorse a transaction
or not.
If the endorsing logic decides to endorse a transaction, it sends a <TRANSACTION-ENDORSED, tid,
tran-proposal,epSig> message to the submitting client (tx.clientID), where:
• tran-proposal := (epID,tid,chaincodeID,txContentBlob,readset,writeset),
where txContentBlob is chaincode/transaction specific information. The intention is to have
txContentBlob used as some representation of tx (e.g., txContentBlob=tx.txPayload).
• epSig is the endorsing peer’s signature on tran-proposal
Else, in case the endorsing logic refuses to endorse the transaction, an endorser may send a message
(TRANSACTION-INVALID, tid, REJECTED) to the submitting client.
Notice that an endorser does not change its state in this step; the updates produced by transaction simulation in the
context of endorsement do not affect the state!
2.3. The submitting client collects an endorsement for a transaction and broadcasts it through
ordering service
The submitting client waits until it receives “enough” messages and signatures on (TRANSACTION-ENDORSED,
tid, *, *) statements to conclude that the transaction proposal is endorsed. As discussed in Section 2.1.2., this
may involve one or more round-trips of interaction with endorsers.
The exact number of “enough” depends on the chaincode endorsement policy (see also Section 3). If the endorsement
policy is satisfied, the transaction has been endorsed; note that it is not yet committed. The collection of signed
TRANSACTION-ENDORSED messages from endorsing peers which establish that a transaction is endorsed is called
an endorsement and denoted by endorsement.
If the submitting client does not manage to collect an endorsement for a transaction proposal, it abandons this trans-
action with an option to retry later.
For a transaction with a valid endorsement, we now start using the ordering service. The submitting client invokes the
ordering service using broadcast(blob), where blob=endorsement. If the client does not have the capability
of invoking the ordering service directly, it may proxy its broadcast through some peer of its choice. Such a peer must
be trusted by the client not to remove any message from the endorsement, or otherwise the transaction may be deemed
invalid. Notice, however, that a proxy peer cannot fabricate a valid endorsement.
When an event deliver(seqno, prevhash, blob) occurs and a peer has applied all state updates for blobs
with sequence number lower than seqno, a peer does the following:
• It checks that the blob.endorsement is valid according to the policy of the chaincode (blob.
tran-proposal.chaincodeID) to which it refers.
• In a typical case, it also verifies that the dependencies (blob.endorsement.tran-proposal.
readset) have not been violated meanwhile. In more complex use cases, tran-proposal fields in en-
dorsement may differ and in this case endorsement policy (Section 3) specifies how the state evolves.
Verification of dependencies can be implemented in different ways, according to the consistency property or “isolation
guarantee” chosen for the state updates. Serializability is the default isolation guarantee, unless the chaincode
endorsement policy specifies a different one. Serializability can be provided by requiring the version associated with
every key in the readset to equal that key’s version in the state, and rejecting transactions that do not satisfy
this requirement.
• If all these checks pass, the transaction is deemed valid or committed. In this case, the peer marks the transaction
with 1 in the bitmask of the PeerLedger, applies blob.endorsement.tran-proposal.writeset
to blockchain state (if tran-proposals are the same, otherwise endorsement policy logic defines the func-
tion that takes blob.endorsement).
• If the endorsement policy verification of blob.endorsement fails, the transaction is invalid and the peer
marks the transaction with 0 in the bitmask of the PeerLedger. It is important to note that invalid transactions
do not change the state.
Note that this is sufficient to have all (correct) peers have the same state after processing a deliver event (block)
with a given sequence number. Namely, by the guarantees of the ordering service, all correct peers will receive
an identical sequence of deliver(seqno, prevhash, blob) events. As the evaluation of the endorsement
policy and evaluation of version dependencies in readset are deterministic, all correct peers will also come to the
same conclusion whether a transaction contained in a blob is valid. Hence, all peers commit and apply the same
sequence of transactions and update their state in the same way.
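To make the serializability check concrete, the following is a minimal sketch in Go; the types are simplified stand-ins
for Fabric’s internal structures, not its actual API.

package validation

// Version identifies the committed version of a key (block and
// transaction number in a height-based versioning scheme).
type Version struct {
	BlockNum uint64
	TxNum    uint64
}

// KVRead is one entry of a transaction's readset.
type KVRead struct {
	Key     string
	Version Version
}

// State abstracts the peer's committed key-value state.
type State interface {
	CommittedVersion(key string) (Version, bool)
}

// Serializable reports whether every key in the readset still has the
// version observed during simulation. A mismatch means a preceding
// committed transaction modified the key, so this transaction must be
// marked invalid (0 in the bitmask of the PeerLedger).
func Serializable(readset []KVRead, s State) bool {
	for _, r := range readset {
		committed, ok := s.CommittedVersion(r.Key)
		if !ok || committed != r.Version {
			return false
		}
	}
	return true
}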
An endorsement policy is a condition on what endorses a transaction. Blockchain peers have a pre-specified set of
endorsement policies, which are referenced by a deploy transaction that installs specific chaincode. Endorsement
policies can be parametrized, and these parameters can be specified by a deploy transaction.
To guarantee blockchain and security properties, the set of endorsement policies should be a set of proven policies
with limited set of functions in order to ensure bounded execution time (termination), determinism, performance and
security guarantees.
Dynamic addition of endorsement policies (e.g., by a deploy transaction at chaincode deploy time) is very sensitive in
terms of bounded policy evaluation time (termination), determinism, performance, and security guarantees. Therefore,
dynamic addition of endorsement policies is not allowed, but may be supported in the future.
A transaction is declared valid only if it has been endorsed according to the policy. An invoke transaction for a
chaincode will first have to obtain an endorsement that satisfies the chaincode’s policy or it will not be committed.
This takes place through the interaction between the submitting client and endorsing peers as explained in Section 2.
Formally the endorsement policy is a predicate on the endorsement, and potentially further state that evaluates to TRUE
or FALSE. For deploy transactions the endorsement is obtained according to a system-wide policy (for example, from
the system chaincode).
An endorsement policy predicate refers to certain variables. Potentially it may refer to:
1. keys or identities relating to the chaincode (found in the metadata of the chaincode), for example, a set of
endorsers;
2. further metadata of the chaincode;
The predicate may contain logical expressions and evaluates to TRUE or FALSE. Typically the condition will use
digital signatures on the transaction invocation issued by endorsing peers for the chaincode.
Suppose the chaincode specifies the endorser set E = {Alice, Bob, Charlie, Dave, Eve, Frank,
George}. Some example policies:
• Valid signatures on the same tran-proposal from all members of E.
• A valid signature from any single member of E.
• Valid signatures on the same tran-proposal from endorsing peers according to the condition (Alice OR
Bob) AND (any two of: Charlie, Dave, Eve, Frank, George).
• Valid signatures on the same tran-proposal by any 5 out of the 7 endorsers. (More generally, for chaincode
with n > 3f endorsers, valid signatures by any 2f+1 out of the n endorsers, or by any group of more than
(n+f)/2 endorsers.)
• Suppose there is an assignment of “stake” or “weights” to the endorsers, like {Alice=49, Bob=15,
Charlie=15, Dave=10, Eve=7, Frank=3, George=1}, where the total stake is 100: The pol-
icy requires valid signatures from a set that has a majority of the stake (i.e., a group with combined stake
strictly more than 50), such as {Alice, X} with any X different from George, or {everyone together
except Alice}. And so on.
• The assignment of stake in the previous example condition could be static (fixed in the metadata of the chain-
code) or dynamic (e.g., dependent on the state of the chaincode and be modified during the execution).
• Valid signatures from (Alice OR Bob) on tran-proposal1 and valid signatures from (any two of:
Charlie, Dave, Eve, Frank, George) on tran-proposal2, where tran-proposal1 and
tran-proposal2 differ only in their endorsing peers and state updates.
How useful these policies are will depend on the application, on the desired resilience of the solution against failures
or misbehavior of endorsers, and on various other properties.
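As an illustration of the stake-based example above, here is a minimal sketch in Go; it is hypothetical, and Fabric’s
actual endorsement policies are expressed and evaluated differently.

package policy

// stake assigns each endorser a weight; the total here is 100.
var stake = map[string]int{
	"Alice": 49, "Bob": 15, "Charlie": 15, "Dave": 10,
	"Eve": 7, "Frank": 3, "George": 1,
}

// majorityOfStake reports whether the endorsers that produced valid
// signatures on the same tran-proposal hold strictly more than half
// of the total stake.
func majorityOfStake(signers []string) bool {
	total, signed := 0, 0
	for _, w := range stake {
		total += w
	}
	seen := map[string]bool{}
	for _, s := range signers {
		if w, ok := stake[s]; ok && !seen[s] {
			seen[s] = true
			signed += w
		}
	}
	return 2*signed > total
}

For instance, {Alice, Bob} holds 64 of 100 and satisfies the policy, while {Alice, George} holds exactly 50 and
does not.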
To maintain the abstraction of a ledger that contains only valid and committed transactions (that appears in Bitcoin,
for example), peers may, in addition to state and Ledger, maintain the Validated Ledger (or VLedger). This is a hash
chain derived from the ledger by filtering out invalid transactions.
The construction of the VLedger blocks (called here vBlocks) proceeds as follows. As the PeerLedger blocks may
contain invalid transactions (i.e., transactions with invalid endorsement or with invalid version dependencies), such
transactions are filtered out by peers before a transaction from a block becomes added to a vBlock. Every peer does
this by itself (e.g., by using the bitmask associated with PeerLedger). A vBlock is defined as a block with the
invalid transactions filtered out. Such vBlocks are inherently dynamic in size and may be empty. An
illustration of vBlock construction is given in the figure below.
Figure 2. Illustration of validated ledger block (vBlock) formation from ledger (PeerLedger) blocks.
vBlocks are chained together to a hash chain by every peer. More specifically, every block of a validated ledger
contains:
• The hash of the previous vBlock.
• vBlock number.
• An ordered list of all valid transactions committed by the peers since the last vBlock was computed (i.e., list of
valid transactions in a corresponding block).
• The hash of the corresponding block (in PeerLedger) from which the current vBlock is derived.
All this information is concatenated and hashed by a peer, producing the hash of the vBlock in the validated ledger.
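A minimal sketch of this hashing step in Go follows; it only mirrors the fields listed above, assumes SHA-256 and a
simple concatenation, and does not reflect Fabric’s actual block serialization.

package vledger

import (
	"crypto/sha256"
	"encoding/binary"
)

// VBlock holds the fields of a validated ledger block described above.
type VBlock struct {
	PrevVBlockHash []byte   // hash of the previous vBlock
	Number         uint64   // vBlock number
	ValidTxs       [][]byte // serialized valid transactions, in order
	PeerBlockHash  []byte   // hash of the corresponding PeerLedger block
}

// Hash concatenates the vBlock fields and hashes them with SHA-256,
// producing the hash of the vBlock in the validated ledger.
func (b *VBlock) Hash() []byte {
	h := sha256.New()
	h.Write(b.PrevVBlockHash)
	num := make([]byte, 8)
	binary.BigEndian.PutUint64(num, b.Number)
	h.Write(num)
	for _, tx := range b.ValidTxs {
		h.Write(tx)
	}
	h.Write(b.PeerBlockHash)
	return h.Sum(nil)
}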
The ledger contains invalid transactions, which may not necessarily be recorded forever. However, peers cannot sim-
ply discard PeerLedger blocks and thereby prune PeerLedger once they establish the corresponding vBlocks.
Namely, in this case, if a new peer joins the network, other peers could not transfer the discarded blocks (pertaining to
PeerLedger) to the joining peer, nor convince the joining peer of the validity of their vBlocks.
To facilitate pruning of the PeerLedger, this document describes a checkpointing mechanism. This mechanism es-
tablishes the validity of the vBlocks across the peer network and allows checkpointed vBlocks to replace the discarded
PeerLedger blocks. This, in turn, reduces storage space, as there is no need to store invalid transactions. It also
reduces the work to reconstruct the state for new peers that join the network (as they do not need to establish validity
of individual transactions when reconstructing the state by replaying PeerLedger, but may simply replay the state
updates contained in the validated ledger).
Checkpointing is performed periodically by the peers every CHK blocks, where CHK is a configurable parameter.
To initiate a checkpoint, the peers broadcast (e.g., gossip) to other peers the message <CHECKPOINT,blocknohash,
blockno,stateHash,peerSig>, where blockno is the current block number and blocknohash is its respective
hash, stateHash is the hash of the latest state (produced by, e.g., a Merkle hash) upon validation of block
blockno, and peerSig is the peer’s signature on (CHECKPOINT,blocknohash,blockno,stateHash), referring
to the validated ledger.
A peer collects CHECKPOINT messages until it obtains enough correctly signed messages with matching blockno,
blocknohash and stateHash to establish a valid checkpoint (see Section 4.2.2.).
Upon establishing a valid checkpoint for block number blockno with blocknohash, a peer:
• if blockno>latestValidCheckpoint.blockno, then a peer assigns
latestValidCheckpoint=(blocknohash,blockno),
• stores the set of respective peer signatures that constitute a valid checkpoint into the set
latestValidCheckpointProof,
• stores the state corresponding to stateHash to latestValidCheckpointedState,
• (optionally) prunes its PeerLedger up to block number blockno (inclusive).
Clearly, the checkpointing protocol raises the following questions: When can a peer prune its PeerLedger? How
many CHECKPOINT messages are “sufficiently many”? This is defined by a checkpoint validity policy, with (at
least) two possible approaches, which may also be combined:
• Local (peer-specific) checkpoint validity policy (LCVP). A local policy at a given peer p may specify a set of
peers which peer p trusts and whose CHECKPOINT messages are sufficient to establish a valid checkpoint. For
example, LCVP at peer Alice may define that Alice needs to receive CHECKPOINT message from Bob, or from
both Charlie and Dave.
• Global checkpoint validity policy (GCVP). A checkpoint validity policy may be specified globally. This is
similar to a local peer policy, except that it is stipulated at the system (blockchain) granularity, rather than peer
granularity. For instance, GCVP may specify that:
– each peer may trust a checkpoint if confirmed by 11 different peers.
– in a specific deployment in which every orderer is collocated with a peer in the same machine (i.e., trust do-
main) and where up to f orderers may be (Byzantine) faulty, each peer may trust a checkpoint if confirmed
by f+1 different peers collocated with orderers.
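A minimal sketch of the second GCVP variant in Go follows; the types are illustrative only, and signature verification
is assumed to have happened upstream.

package checkpoint

// CheckpointMsg is an illustrative stand-in for a received CHECKPOINT message.
type CheckpointMsg struct {
	BlockNo     uint64
	BlockHash   string
	StateHash   string
	PeerID      string
	SigVerified bool // assume the peer signature was already verified
}

// ValidCheckpoint reports whether at least f+1 distinct peers have correctly
// signed matching (blockno, blocknohash, stateHash) values, tolerating up to
// f (Byzantine) faulty orderer-collocated peers.
func ValidCheckpoint(msgs []CheckpointMsg, blockNo uint64, blockHash, stateHash string, f int) bool {
	signers := map[string]bool{}
	for _, m := range msgs {
		if m.SigVerified && m.BlockNo == blockNo &&
			m.BlockHash == blockHash && m.StateHash == stateHash {
			signers[m.PeerID] = true
		}
	}
	return len(signers) >= f+1
}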
This document outlines the transactional mechanics that take place during a standard asset exchange. The scenario
includes two clients, A and B, who are buying and selling radishes. They each have a peer on the network through
which they send their transactions and interact with the ledger.
Assumptions
This flow assumes that a channel is set up and running. The application user has registered and enrolled with the orga-
nization’s certificate authority (CA) and received back necessary cryptographic material, which is used to authenticate
to the network.
The chaincode (containing a set of key value pairs representing the initial state of the radish market) is installed on the
peers and instantiated on the channel. The chaincode contains logic defining a set of transaction instructions and the
agreed upon price for a radish. An endorsement policy has also been set for this chaincode, stating that both peerA
and peerB must endorse any transaction.
Note: The MSP is a peer component that allows peers to verify transaction requests arriving from clients and to sign
transaction results (endorsements). The writing policy is defined at channel creation time and determines which users
are entitled to submit a transaction to that channel.
6. Ledger updated
Each peer appends the block to the channel’s chain, and for each valid transaction the write sets are committed to the
current state database. An event is emitted to notify the client application that the transaction (invocation) has been
immutably appended to the chain, as well as whether the transaction was validated or invalidated.
Note: See the sequence diagram to better understand the transaction flow.
Hyperledger Fabric intends to offer a number of SDKs for a wide variety of programming languages. The first two
delivered are the Node.js and Java SDKs. We hope to provide Python, REST and Go SDKs in a subsequent release.
• Hyperledger Fabric Node SDK documentation.
• Hyperledger Fabric Java SDK documentation.
In order to execute chaincode on peers, submit transactions to orderers, and to be updated about the status of transac-
tions, applications connect to an API exposed by an SDK.
However, the SDK needs a lot of information in order to allow applications to connect to the relevant network nodes.
In addition to the CA and TLS certificates of the orderers and peers on the channel – as well as their IP addresses and
port numbers – it must know the relevant endorsement policies as well as which peers have the chaincode installed (so
the application knows which peers to send chaincode proposals to).
Prior to v1.2, this information was statically encoded. However, this implementation is not dynamically reactive to
network changes (such as the addition of peers who have installed the relevant chaincode, or peers that are temporarily
offline). Static configurations also do not allow applications to react to changes of the endorsement policy itself (as
might happen when a new organization joins a channel).
In addition, the client application has no way of knowing which peers have updated ledgers and which do not. As a
result, the application might submit proposals to peers whose ledger data is not in sync with the rest of the network,
resulting in transactions being invalidated upon commit and wasting resources as a consequence.
The discovery service improves this process by having the peers compute the needed information dynamically and
present it to the SDK in a consumable manner.
The application is bootstrapped knowing about a group of peers which are trusted by the application devel-
oper/administrator to provide authentic responses to discovery queries. A good candidate peer to be used by the
client application is one that is in the same organization. Note that in order for peers to be known to the discovery
service, they must have an EXTERNAL_ENDPOINT defined. To see how to do this, check out our Service Discovery
CLI documentation.
The application issues a configuration query to the discovery service and obtains all the static information it would
have otherwise needed to communicate with the rest of the nodes of the network. This information can be refreshed at
any point by sending a subsequent query to the discovery service of a peer.
The service runs on peers – not on the application – and uses the network metadata information maintained by the
gossip communication layer to find out which peers are online. It also fetches information, such as any relevant
endorsement policies, from the peer’s state database.
With service discovery, applications no longer need to specify which peers they need endorsements from. The SDK
can simply send a query to the discovery service asking which peers are needed given a channel and a chaincode ID.
The discovery service will then compute a descriptor comprised of two objects:
1. Layouts: a list of groups of peers and a corresponding amount of peers from each group which should be
selected.
2. Group to peer mapping: from the groups in the layouts to the peers of the channel. In practice, each group
would most likely be peers that represent individual organizations, but because the service API is generic and
ignorant of organizations this is just a “group”.
The following is an example of a descriptor from the evaluation of a policy of AND(Org1, Org2) where there are
two peers in each of the organizations.
Layouts: [
    QuantitiesByGroup: {
        "Org1": 1,
        "Org2": 1,
    }
],
EndorsersByGroups: {
    "Org1": [peer0.org1, peer1.org1],
    "Org2": [peer0.org2, peer1.org2]
}
In other words, the endorsement policy requires a signature from one peer in Org1 and one peer in Org2. And it
provides the names of available peers in those orgs who can endorse (peer0 and peer1 in both Org1 and in Org2).
The SDK then selects a random layout from the list. In the example above, the endorsement policy is Org1 AND Org2.
If instead it was an OR policy, the SDK would randomly select either Org1 or Org2, since a signature from a peer from
either Org would satisfy the policy.
After the SDK has selected a layout, it selects from the peers in the layout based on criteria specified on the client
side (the SDK can do this because it has access to metadata like ledger height). For example, it can prefer peers with
higher ledger heights over others, or exclude peers that the application has discovered to be offline, according to
the number of peers from each group in the layout. If no single peer is preferable based on the criteria, the SDK will
randomly select from the peers that best meet the criteria.
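The selection step might look like the following minimal Go sketch; the Peer and Descriptor types are hypothetical
stand-ins for what the SDKs expose, and the only criterion shown is preferring higher ledger heights.

package discovery

import (
	"math/rand"
	"sort"
)

// Peer and Descriptor are hypothetical stand-ins for SDK types.
type Peer struct {
	Name         string
	LedgerHeight uint64
}

type Descriptor struct {
	Layouts           []map[string]int  // group -> number of endorsers needed
	EndorsersByGroups map[string][]Peer // group -> candidate peers
}

// SelectEndorsers picks a random layout and, within each group, prefers the
// peers with the highest ledger height, as described above.
func SelectEndorsers(d Descriptor) []Peer {
	layout := d.Layouts[rand.Intn(len(d.Layouts))]
	var chosen []Peer
	for group, needed := range layout {
		candidates := append([]Peer(nil), d.EndorsersByGroups[group]...)
		sort.Slice(candidates, func(i, j int) bool {
			return candidates[i].LedgerHeight > candidates[j].LedgerHeight
		})
		if needed > len(candidates) {
			needed = len(candidates)
		}
		chosen = append(chosen, candidates[:needed]...)
	}
	return chosen
}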
Special requirements
When the peer is running with TLS enabled the client must provide a TLS certificate when connecting to the peer. If the
peer isn’t configured to verify client certificates (clientAuthRequired is false), this TLS certificate can be self-signed.
10.5 Channels
A Hyperledger Fabric channel is a private “subnet” of communication between two or more specific network mem-
bers, for the purpose of conducting private and confidential transactions. A channel is defined by members (organiza-
tions), anchor peers per member, the shared ledger, chaincode application(s) and the ordering service node(s). Each
transaction on the network is executed on a channel, where each party must be authenticated and authorized to transact
on that channel. Each peer that joins a channel has its own identity given by a membership services provider (MSP),
which authenticates each peer to its channel peers and services.
To create a new channel, the client SDK calls the configuration system chaincode and references properties such as
anchor peers and members (organizations). This request creates a genesis block for the channel ledger,
which stores configuration information about the channel policies, members and anchor peers. When adding a new
member to an existing channel, either this genesis block, or if applicable, a more recent reconfiguration block, is shared
with the new member.
Note: See the Channel Configuration (configtx) section for more details on the properties and proto structures of
config transactions.
The election of a leading peer for each member on a channel determines which peer communicates with the
ordering service on behalf of the member. If no leader is identified, an algorithm can be used to elect one.
The consensus service orders transactions and delivers them, in a block, to each leading peer, which then distributes
the block to its member peers, and across the channel, using the gossip protocol.
Although any one anchor peer can belong to multiple channels, and therefore maintain multiple ledgers, no ledger
data can pass from one channel to another. This separation of ledgers, by channel, is defined and implemented by con-
figuration chaincode, the identity membership service and the gossip data dissemination protocol. The dissemination
of data, which includes information on transactions, ledger state and channel membership, is restricted to peers with
verifiable membership on the channel. This isolation of peers and ledger data, by channel, allows network members
that require private and confidential transactions to coexist with business competitors and other restricted members,
on the same blockchain network.
Because Fabric is a distributed system that will usually involve multiple organizations (sometimes in different countries
or even continents), it is possible (and typical) that many different versions of Fabric code will exist in the network.
Nevertheless, it’s vital that networks process transactions in the same way so that everyone has the same view of the
current network state.
This means that every network – and every channel within that network – must define a set of what we call “capabili-
ties” to be able to participate in processing transactions. For example, Fabric v1.1 introduces new MSP role types of
“Peer” and “Client”. However, if a v1.0 peer does not understand these new role types, it will not be able to appro-
priately evaluate an endorsement policy that references them. This means that before the new role types may be used,
the network must agree to enable the v1.1 channel capability requirement, ensuring that all peers come to the same
decision.
Only binaries which support the required capabilities will be able to participate in the channel, and newer binary
versions will not enable new validation logic until the corresponding capability is enabled. In this way, capability
requirements ensure that even with disparate builds and versions, it is not possible for the network to suffer a state
fork.
Capability requirements are defined per channel in the channel configuration (found in the channel’s most recent
configuration block). The channel configuration contains three locations, each of which defines a capability of a
different type.
• Channel: these capabilities apply to both peer and orderers and are located in the root Channel group.
• Orderer: apply to orderers only and are located in the Orderer group.
• Application: apply to peers only and are located in the Application group.
The capabilities are broken into these groups in order to align with the existing administrative structure. Updating
orderer capabilities is something the ordering orgs would manage independent of the application orgs. Similarly,
updating application capabilities is something only the application admins would manage. By splitting the capabilities
between “Orderer” and “Application”, a hypothetical network could run a v1.6 ordering service while supporting a
v1.3 peer application network.
However, some capabilities cross both the ‘Application’ and ‘Orderer’ groups. As we saw earlier, adding a new MSP
role type is something both the orderer and application admins agree to and need to recognize. The orderer must
understand the meaning of MSP roles in order to allow the transactions to pass through ordering, while the peers must
understand the roles in order to validate the transaction. These kinds of capabilities – which span both the application
and orderer components – are defined in the top level “Channel” group.
Note: It is possible that the channel capabilities are defined to be at version v1.3 while the orderer and application
capabilities are defined to be at versions v1.1 and v1.4, respectively. Enabling a capability at the “Channel” group level
does not imply that this same capability is available at the more specific “Orderer” and “Application” group levels.
Capabilities are set as part of the channel configuration (either as part of the initial configuration – which we’ll talk
about in a moment – or as part of a reconfiguration).
Note: We have two documents that talk through different aspects of channel reconfigurations. First, we have a
tutorial that will take you through the process of Adding an Org to a Channel. And we also have a document that talks
through Updating a Channel Configuration which gives an overview of the different kinds of updates that are possible
as well as a fuller look at the signature process.
Because new channels copy the configuration of the Orderer System Channel by default, new channels will auto-
matically be configured to work with the orderer and channel capabilities of the Orderer System Channel and the
application capabilities specified by the channel creation transaction. Channels that already exist, however, must be
reconfigured.
The schema for the Capabilities value is defined in the protobuf as:
message Capabilities {
map<string, Capability> capabilities = 1;
}
message Capability { }
As an example, a Capabilities value that enables the v1.1 capability is rendered in JSON as:
{
"capabilities": {
"V1_1": {}
}
}
In the configtx.yaml file distributed in the config directory of the release artifacts, there is a Capabilities
section which enumerates the possible capabilities for each capability type (Channel, Orderer, and Application).
The simplest way to enable capabilities is to pick a v1.1 sample profile and customize it for your network. For example:
SampleSingleMSPSoloV1_1:
    Capabilities:
        <<: *GlobalCapabilities
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
        Capabilities:
            <<: *OrdererCapabilities
    Consortiums:
        SampleConsortium:
            Organizations:
                - *SampleOrg
Note that there is a Capabilities section defined at the root level (for the channel capabilities), and at the Orderer
level (for orderer capabilities). The sample above uses a YAML reference to include the capabilities as defined at the
bottom of the YAML.
When defining the orderer system channel there is no Application section, as those capabilities are defined during the
creation of an application channel. To define a new channel’s application capabilities at channel creation time, the
application admins should model their channel creation transaction after the SampleSingleMSPChannelV1_1
profile.
SampleSingleMSPChannelV1_1:
    Consortium: SampleConsortium
    Application:
        Organizations:
            - *SampleOrg
        Capabilities:
            <<: *ApplicationCapabilities
Here, the Application section has a new element Capabilities which references the
ApplicationCapabilities section defined at the end of the YAML.
Note: The capabilities for the Channel and Orderer sections are inherited from the definition in the ordering system
channel and are automatically included by the orderer during the process of channel creation.
State database options include LevelDB and CouchDB. LevelDB is the default key-value state database embedded
in the peer process. CouchDB is an optional alternative external state database. Like the LevelDB key-value store,
CouchDB can store any binary data that is modeled in chaincode (CouchDB attachment functionality is used internally
for non-JSON binary data). But as a JSON document store, CouchDB additionally enables rich query against the
chaincode data, when chaincode values (e.g. assets) are modeled as JSON data.
Both LevelDB and CouchDB support core chaincode operations such as getting and setting a key (asset), and querying
based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against
multiple parameters. For example, a composite key of owner,asset_id can be used to query all assets owned by a
certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions
that update the ledger.
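For example, such an owner,asset_id composite key could be modeled and queried from Go chaincode along the
following lines (a sketch against the Fabric 1.x shim; the owner~asset_id object type name and the assetsByOwner
function are illustrative, and error handling is abridged):

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// assetsByOwner returns the IDs of all assets stored under composite keys
// of the form owner~asset_id for the given owner.
func assetsByOwner(stub shim.ChaincodeStubInterface, owner string) ([]string, error) {
	// On write, an asset would be stored under a composite key, e.g.:
	//   key, _ := stub.CreateCompositeKey("owner~asset_id", []string{owner, assetID})
	//   stub.PutState(key, value)
	it, err := stub.GetStateByPartialCompositeKey("owner~asset_id", []string{owner})
	if err != nil {
		return nil, err
	}
	defer it.Close()

	var assetIDs []string
	for it.HasNext() {
		kv, err := it.Next()
		if err != nil {
			return nil, err
		}
		// Split the composite key back into its attributes.
		_, parts, err := stub.SplitCompositeKey(kv.Key)
		if err != nil {
			return nil, err
		}
		assetIDs = append(assetIDs, parts[1]) // parts = [owner, assetID]
	}
	return assetIDs, nil
}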
If you model assets as JSON and use CouchDB, you can also perform complex rich queries against the chaincode
data values, using the CouchDB JSON query language within chaincode. These types of queries are excellent for
understanding what is on the ledger. Proposal responses for these types of queries are typically useful to the client
application, but are not typically submitted as transactions to the ordering service. In fact, there is no guarantee
the result set is stable between chaincode execution and commit time for rich queries, and therefore rich queries are
not appropriate for use in update transactions, unless your application can guarantee the result set is stable between
chaincode execution time and commit time, or can handle potential changes in subsequent transactions. For example,
if you perform a rich query for all assets owned by Alice and transfer them to Bob, a new asset may be assigned to
Alice by another transaction between chaincode execution time and commit time, and you would miss this “phantom”
item.
CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms
of setup, management, and operations. You may consider starting with the default embedded LevelDB, and move to
CouchDB if you require the additional complex rich queries. It is a good practice to model chaincode asset data as
JSON, so that you have the option to perform complex rich queries if needed in the future.
Note: The key for a CouchDB JSON document cannot begin with an underscore (“_”). Also, a JSON document
cannot use the following field names at the top level. These are reserved for internal use.
• Any field beginning with an underscore, "_"
• ~version
Chaincode queries
Most of the chaincode shim APIs can be utilized with either LevelDB or CouchDB state database, e.g. GetState,
PutState, GetStateByRange, GetStateByPartialCompositeKey. Additionally, when you utilize
CouchDB as the state database and model assets as JSON in chaincode, you can perform rich queries against the
JSON in the state database by using the GetQueryResult API and passing a CouchDB query string. The query
string follows the CouchDB JSON query syntax.
The marbles02 fabric sample demonstrates use of CouchDB queries from chaincode. It includes a
queryMarblesByOwner() function that demonstrates parameterized queries by passing an owner id into chain-
code. It then queries the state data for JSON documents matching the docType of “marble” and the owner id using the
JSON query syntax:
{"selector":{"docType":"marble","owner":<OWNER_ID>}}
CouchDB pagination
Fabric supports paging of query results for rich queries and range-based queries. APIs supporting pagination allow
the use of a page size and bookmarks for both range and rich queries. To support efficient pagination, the
Fabric pagination APIs must be used. Specifically, the CouchDB limit keyword will not be honored in CouchDB
queries, since Fabric itself manages the pagination of query results and implicitly sets the pageSize limit that is passed
to CouchDB.
If a pageSize is specified using the paginated query APIs (GetStateByRangeWithPagination(),
GetStateByPartialCompositeKeyWithPagination(), and GetQueryResultWithPagination()),
a set of results (bound by the pageSize) will be returned to the chaincode along with a bookmark. The bookmark can
be returned from chaincode to invoking clients, which can use the bookmark in a follow on query to receive the next
“page” of results.
The pagination APIs are for use in read-only transactions only; the query results are intended to support client paging
requirements. For transactions that need to read and write, use the non-paginated chaincode query APIs. Within
chaincode you can iterate through result sets to your desired depth.
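A paginated rich query from Go chaincode might look like the following sketch (API signature as in the Fabric 1.4
Go shim; error handling abridged):

package main

import (
	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// queryPage fetches one page of rich query results and returns the bookmark
// that the client can pass back in a follow-on query for the next page.
func queryPage(stub shim.ChaincodeStubInterface, query string,
	pageSize int32, bookmark string) (string, error) {

	it, meta, err := stub.GetQueryResultWithPagination(query, pageSize, bookmark)
	if err != nil {
		return "", err
	}
	defer it.Close()

	for it.HasNext() {
		kv, err := it.Next()
		if err != nil {
			return "", err
		}
		_ = kv // process the record, e.g. append it to the response
	}
	return meta.Bookmark, nil
}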
Regardless of whether the pagination APIs are utilized, all chaincode queries are bound by totalQueryLimit
(default 100000) from core.yaml. This is the maximum number of results that chaincode will iterate through and
return to the client, in order to avoid accidental or malicious long-running queries.
Note: Regardless of whether chaincode uses paginated queries or not, the peer will query CouchDB in batches based
on internalQueryLimit (default 1000) from core.yaml. This behavior ensures reasonably sized result sets
are passed between the peer and CouchDB when executing chaincode, and is transparent to chaincode and the calling
client.
CouchDB indexes
Indexes in CouchDB are required in order to make JSON queries efficient and are required for any JSON query with
a sort. Indexes can be packaged alongside chaincode in a /META-INF/statedb/couchdb/indexes directory.
Each index must be defined in its own text file with extension *.json with the index definition formatted in JSON
following the CouchDB index JSON syntax. For example, to support the above marble query, a sample index on the
docType and owner fields is provided:
{"index":{"fields":["docType","owner"]},"ddoc":"indexOwnerDoc", "name":"indexOwner",
˓→"type":"json"}
Upon deployment, the index will automatically be utilized by chaincode queries. CouchDB can automatically deter-
mine which index to use based on the fields being used in a query. Alternatively, in the selector query the index can be
specified using the use_index keyword.
The same index may exist in subsequent versions of the chaincode that gets installed. To change the index, use the
same index name but alter the index definition. Upon installation/instantiation, the index definition will get re-deployed
to the peer’s state database.
If you have a large volume of data already, and later install the chaincode, the index creation upon installation may take
some time. Similarly, if you have a large volume of data already and instantiate a subsequent version of the chaincode,
the index creation may take some time. Avoid calling chaincode functions that query the state database at these times
as the chaincode query may time out while the index is getting initialized. During transaction processing, the indexes
will automatically get refreshed as blocks are committed to the ledger.
CouchDB is enabled as the state database by changing the stateDatabase configuration option from goleveldb
to CouchDB. Additionally, the couchDBAddress needs to be configured to point to the CouchDB to be used by the
peer. The username and password properties should be populated with an admin username and password if CouchDB
is configured with a username and password. Additional options are provided in the couchDBConfig section and
are documented in place. Changes to core.yaml take effect immediately after restarting the peer.
You can also pass in docker environment variables to override core.yaml values, for example
CORE_LEDGER_STATE_STATEDATABASE and CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS.
Below is the stateDatabase section from core.yaml:
state:
  # stateDatabase - options are "goleveldb", "CouchDB"
  # goleveldb - default state database stored in goleveldb.
  # CouchDB - store state database in CouchDB
  stateDatabase: goleveldb
  # Limit on the number of records to return per query
  totalQueryLimit: 100000
  couchDBConfig:
    # It is recommended to run CouchDB on the same server as the peer, and
    # not map the CouchDB container port to a server port in docker-compose.
    # Otherwise proper security must be provided on the connection between
    # CouchDB client (on the peer) and server.
    couchDBAddress: couchdb:5984
    # This username must have read and write authority on CouchDB
    username:
    # The password is recommended to pass as an environment variable
    # during start up (e.g. LEDGER_COUCHDBCONFIG_PASSWORD).
    # If it is stored here, the file must be access control protected
    # to prevent unintended users from discovering the password.
    password:
    # Number of retries for CouchDB errors
    maxRetries: 3
    # Number of retries for CouchDB errors during peer startup
    maxRetriesOnStartup: 10
    # CouchDB request timeout (unit: duration, e.g. 20s)
    requestTimeout: 35s
    # Limit on the number of records per each CouchDB query
    # Note that chaincode queries are only bound by totalQueryLimit.
    # Internally the chaincode may execute multiple CouchDB queries,
    # each of size internalQueryLimit.
    internalQueryLimit: 1000
The CouchDB containers supplied with Hyperledger Fabric allow the CouchDB username and password to be set at
container creation using the COUCHDB_USER and COUCHDB_PASSWORD environment variables, e.g., via Docker
Compose scripting.
For CouchDB installations outside of the docker images supplied with Fabric, the local.ini file of that installation must
be edited to set the admin username and password.
Docker compose scripts only set the username and password at the creation of the container. The local.ini file must be
edited if the username or password is to be changed after creation of the container.
Avoid using chaincode for queries that will result in a scan of the entire CouchDB database. Full length database
scans will result in long response times and will degrade the performance of your network. You can take some of the
following steps to avoid long queries:
• When using JSON queries:
– Be sure to create indexes in the chaincode package.
– Avoid query operators such as $or, $in and $regex, which lead to full database scans.
• For range queries, composite key queries, and JSON queries:
– Utilize paging support (as of v1.3) instead of one large result set.
• If you want to build a dashboard or collect aggregate data as part of your application, you can query an off-chain
database that replicates the data from your blockchain network. This will allow you to query and analyze the
blockchain data in a data store optimized for your needs, without degrading the performance of your network or
disrupting transactions. To achieve this, applications may use block or chaincode events to write transaction data
to an off-chain database or analytics engine. For each block received, the block listener application would iterate
through the block transactions and build a data store using the key/value writes from each valid transaction’s
rwset. The Peer channel-based event services provide replayable events to ensure the integrity of downstream
data stores.
In previous versions of Fabric, the peer event service was known as the event hub. This service sent events any time
a new block was added to the peer’s ledger, regardless of the channel to which that block pertained, and it was only
accessible to members of the organization running the eventing peer (i.e., the one being connected to for events).
Starting with v1.1, there are two new services which provide events. These services use an entirely different design to
provide events on a per-channel basis. This means that registration for events occurs at the level of the channel instead
of the peer, allowing for fine-grained control over access to the peer’s data. Requests to receive events are accepted
from identities outside of the peer’s organization (as defined by the channel configuration). This also provides greater
reliability and a way to receive events that may have been missed (whether due to a connectivity issue or because the
peer is joining a network that has already been running).
• Deliver
This service sends entire blocks that have been committed to the ledger. If any events were set by a chaincode, these
can be found within the ChaincodeActionPayload of the block.
• DeliverFiltered
This service sends “filtered” blocks, minimal sets of information about blocks that have been committed to the ledger.
It is intended to be used in a network where owners of the peers wish for external clients to primarily receive informa-
tion about their transactions and the status of those transactions. If any events were set by a chaincode, these can be
found within the FilteredChaincodeAction of the filtered block.
Note: The payload of chaincode events will not be included in filtered blocks.
Registration for events from either service is done by sending an envelope containing a deliver seek info message to
the peer; the message contains the desired start and stop positions and the seek behavior (block until ready or fail if
not ready).
There are helper variables SeekOldest and SeekNewest that can be used to indicate the oldest (i.e. first) block
or the newest (i.e. last) block on the ledger. To have the services send events indefinitely, the SeekInfo message
should include a stop position of MAXINT64.
Note: If mutual TLS is enabled on the peer, the TLS certificate hash must be set in the envelope’s channel header.
By default, both services use the Channel Readers policy to determine whether to authorize requesting clients for
events.
• status – HTTP status code. Both services will return the appropriate failure code if any failure occurs; other-
wise, they will return 200 - SUCCESS once the service has completed sending all information requested by the
SeekInfo message.
• block – returned only by the Deliver service.
• filtered block – returned only by the DeliverFiltered service.
A filtered block contains:
• channel ID.
• number (i.e. the block number).
• array of filtered transactions, where each filtered transaction contains:
– transaction ID.
– type (e.g. ENDORSER_TRANSACTION, CONFIG).
– transaction validation code.
– filtered transaction actions:
* array of filtered chaincode actions, each containing the chaincode event for the transaction (with the
payload nilled out).
For further details on using the event services, refer to the SDK documentation.
Note: This topic assumes an understanding of the conceptual material in the documentation on private data.
A collection definition contains one or more collections, each having a policy definition listing the organizations in
the collection, as well as properties used to control dissemination of private data at endorsement time and, optionally,
whether the data will be purged.
The collection definition gets deployed to the channel at the time of chaincode instantiation (or upgrade). If using the
peer CLI to instantiate the chaincode, the collection definition file is passed to the chaincode instantiation using the
--collections-config flag. If using a client SDK, check the SDK documentation for information on providing
the collection definition.
Collection definitions are composed of the following properties:
• name: Name of the collection.
• policy: The private data collection distribution policy defines which organizations’ peers are allowed to
persist the collection data expressed using the Signature policy syntax, with each member being included in
an OR signature policy list. To support read/write transactions, the private data distribution policy must define a
broader set of organizations than the chaincode endorsement policy, as peers must have the private data in order
to endorse proposed transactions. For example, in a channel with ten organizations, five of the organizations
might be included in a private data collection distribution policy, but the endorsement policy might call for any
three of the organizations to endorse.
• requiredPeerCount: Minimum number of peers (across authorized organizations) that each endorsing peer
must successfully disseminate private data to before the peer signs the endorsement and returns the proposal
response back to the client. Requiring dissemination as a condition of endorsement will ensure that private data
is available in the network even if the endorsing peer(s) become unavailable. When requiredPeerCount is
0, it means that no distribution is required, but there may be some distribution if maxPeerCount is greater
than zero. A requiredPeerCount of 0 would typically not be recommended, as it could lead to loss of
private data in the network if the endorsing peer(s) becomes unavailable. Typically you would want to require
at least some distribution of the private data at endorsement time to ensure redundancy of the private data on
multiple peers in the network.
• maxPeerCount: For data redundancy purposes, the maximum number of other peers (across authorized or-
ganizations) that each endorsing peer will attempt to distribute the private data to. If an endorsing peer becomes
unavailable between endorsement time and commit time, other peers that are collection members but who did
not yet receive the private data at endorsement time, will be able to pull the private data from peers the private
data was disseminated to. If this value is set to 0, the private data is not disseminated at endorsement time,
forcing private data pulls against endorsing peers on all authorized peers at commit time.
• blockToLive: Represents how long the data should live on the private database in terms of blocks. The data
will live for this specified number of blocks on the private database and after that it will get purged, making this
data obsolete from the network. To keep private data indefinitely, that is, to never purge private data, set the
blockToLive property to 0.
• memberOnlyRead: a value of true indicates that peers automatically enforce that only clients belonging
to one of the collection member organizations are allowed read access to private data. If a client from a non-
member org attempts to execute a chaincode function that performs a read of a private data, the chaincode
invocation is terminated with an error. Utilize a value of false if you would like to encode more granular
access control within individual chaincode functions.
Here is a sample collection definition JSON file, containing an array of two collection definitions:
[
{
"name": "collectionMarbles",
"policy": "OR('Org1MSP.member', 'Org2MSP.member')",
"requiredPeerCount": 0,
"maxPeerCount": 3,
"blockToLive":1000000,
"memberOnlyRead": true
},
{
"name": "collectionMarblePrivateDetails",
"policy": "OR('Org1MSP.member')",
"requiredPeerCount": 0,
"maxPeerCount": 3,
"blockToLive":3,
"memberOnlyRead": true
}
]
This example uses the organizations from the BYFN sample network, Org1 and Org2. The policy in the
collectionMarbles definition authorizes both organizations to the private data. This is a typical configura-
tion when the chaincode data needs to remain private from the ordering service nodes. However, the policy in the
collectionMarblePrivateDetails definition restricts access to a subset of organizations in the channel (in
this case Org1). In a real scenario, there would be many organizations in the channel, with two or more organizations
in each collection sharing private data between them.
Endorsement
Since private data is not included in the transactions that get submitted to the ordering service, and therefore not
included in the blocks that get distributed to all peers in a channel, the endorsing peer plays an important role in
disseminating private data to other peers of authorized organizations. This ensures the availability of private data
in the channel’s collection, even if endorsing peers become unavailable after their endorsement. To assist with this
dissemination, the maxPeerCount and requiredPeerCount properties in the collection definition control the
degree of dissemination at endorsement time.
If the endorsing peer cannot successfully disseminate the private data to at least the requiredPeerCount, it will
return an error back to the client. The endorsing peer will attempt to disseminate the private data to peers of different
organizations, in an effort to ensure that each authorized organization has a copy of the private data. Since transactions
are not committed at chaincode execution time, the endorsing peer and recipient peers store a copy of the private data
in a local transient store alongside their blockchain until the transaction is committed.
When authorized peers do not have a copy of the private data in their transient data store at commit time (either because
they were not an endorsing peer or because they did not receive the private data via dissemination at endorsement time),
they will attempt to pull the private data from another authorized peer, for a configurable amount of time based on the
peer property peer.gossip.pvtData.pullRetryThreshold in the peer configuration core.yaml file.
Note: The peers being asked for private data will only return the private data if the requesting peer is a member of
the collection as defined by the private data dissemination policy.
Note: For collections to work, it is important to have cross organizational gossip configured correctly. Refer to our
documentation on Gossip data dissemination protocol, paying particular attention to the section on “anchor peers”.
A set of shim APIs are available for setting and retrieving private data.
The same chaincode data operations can be applied to channel state data and private data, but in the case
of private data, a collection name is specified along with the data in the chaincode APIs, for example
PutPrivateData(collection,key,value) and GetPrivateData(collection,key).
Since the chaincode proposal gets stored on the blockchain, it is also important not to include private data in the main
part of the chaincode proposal. A special field in the chaincode proposal called the transient field can be used to
pass private data from the client (or data that chaincode will use to generate private data), to chaincode invocation on
the peer. The chaincode can retrieve the transient field by calling the GetTransient() API. This transient field
gets excluded from the channel transaction.
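Putting these pieces together, a function along the following lines could accept private data through the transient field
and write it to a collection (a sketch against the Go shim; the marble_price transient key and the setMarblePrice
function are illustrative):

package main

import (
	"errors"

	"github.com/hyperledger/fabric/core/chaincode/shim"
)

// setMarblePrice reads private data from the transient field (which is
// excluded from the channel transaction) and stores it in a collection.
func setMarblePrice(stub shim.ChaincodeStubInterface) error {
	transient, err := stub.GetTransient()
	if err != nil {
		return err
	}
	price, ok := transient["marble_price"] // illustrative transient key
	if !ok {
		return errors.New("marble_price must be provided in the transient field")
	}
	// Only peers authorized by the collection policy persist this value.
	return stub.PutPrivateData("collectionMarblePrivateDetails", "marble1", price)
}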
Reconciliation
Starting in v1.4, a background process allows peers who are part of a collection to receive data they were entitled to
receive but did not yet receive — because of a network failure, for example — by keeping track of private data that
was “missing” at the time of block commit. The peer will periodically attempt to fetch the private data from other
collection member peers that are expected to have it.
This “reconciliation” also applies to peers of new organizations that are added to an existing collection. The same
background process described above will also attempt to fetch private data that was committed before they joined the
collection.
Note that this private data reconciliation feature only works on peers running v1.4 or later of Fabric.
Until version 1.3, access control to private data based on collection membership was enforced for peers only. Ac-
cess control based on the organization of the chaincode proposal submitter was required to be encoded in chaincode
logic. Starting in v1.4 a collection configuration option memberOnlyRead can automatically enforce access control
based on the organization of the chaincode proposal submitter. For more information about collection configuration
definitions and how to set them, refer back to the Private data collection definition section of this topic.
Note: If you would like more granular access control, you can set memberOnlyRead to false. You can then apply
your own access control logic in chaincode, for example by calling the GetCreator() chaincode API or using the client
identity chaincode library.
Private collection data can be queried just like normal channel data, using shim APIs:
• GetPrivateDataByRange(collection, startKey, endKey string)
• GetPrivateDataByPartialCompositeKey(collection, objectType string, keys
[]string)
And for the CouchDB state database, JSON content queries can be passed using the shim API:
• GetPrivateDataQueryResult(collection, query string)
Limitations:
• Clients that call chaincode that executes range or rich JSON queries should be aware that they may receive a
subset of the result set, if the peer they query has missing private data, based on the explanation in Private Data
Dissemination section above. Clients can query multiple peers and compare the results to determine if a peer
may be missing some of the result set.
• Chaincode that executes range or rich JSON queries and updates data in a single transaction is not supported, as
the query results cannot be validated on the peers that don’t have access to the private data, or on peers that are
missing the private data that they have access to. If a chaincode invocation both queries and updates private data,
the proposal request will return an error. If your application can tolerate result set changes between chaincode
execution and validation/commit time, then you could call one chaincode function to perform the query, and then
call a second chaincode function to make the updates. Note that calls to GetPrivateData() to retrieve individual
keys can be made in the same transaction as PutPrivateData() calls, since all peers can validate key reads based
on the hashed key version.
The topic CouchDB as the State Database describes indexes that can be applied to the channel’s state database to
enable JSON content queries, by packaging indexes in a META-INF/statedb/couchdb/indexes directory at
chaincode installation time. Similarly, indexes can also be applied to private data collections, by packaging indexes in
a META-INF/statedb/couchdb/collections/<collection_name>/indexes directory. An example
index is available here.
To keep private data indefinitely, that is, to never purge private data, set the blockToLive property to 0.
Recall that prior to commit, peers store private data in a local transient data store. This data automatically gets purged
when the transaction commits. But if a transaction was never submitted to the channel and therefore never committed,
the private data would remain in each peer’s transient store. This data is purged from the transient store after a
configurable number of blocks by using the peer’s peer.gossip.pvtData.transientstoreMaxBlockRetention
property in the peer core.yaml file.
If a collection is referenced by a chaincode, the chaincode will use the prior collection definition unless a new collection
definition is specified at upgrade time. If a collection configuration is specified during the upgrade, a definition for
each of the existing collections must be included, and you can add new collection definitions.
Collection updates become effective when a peer commits the block that contains the chaincode upgrade transaction.
Note that collections cannot be deleted, as there may be prior private data hashes on the channel’s blockchain that
cannot be removed.
This document discusses the details of the current implementation of the semantics of read-write sets.
During simulation of a transaction at an endorser, a read-write set is prepared for the transaction. The read set
contains a list of unique keys and their committed versions that the transaction reads during simulation. The write
set contains a list of unique keys (though there can be overlap with the keys present in the read set) and their new
values that the transaction writes. A delete marker is set (in the place of new value) for the key if the update performed
by the transaction is to delete the key.
Further, if the transaction writes a value multiple times for a key, only the last written value is retained. Also, if a
transaction reads a value for a key, the value in the committed state is returned even if the transaction has updated the
value for the key before issuing the read. In other words, read-your-writes semantics are not supported.
As noted earlier, the versions of the keys are recorded only in the read set; the write set just contains the list of unique
keys and their latest values set by the transaction.
There could be various schemes for implementing versions. The minimal requirement for a versioning scheme is to
produce non-repeating identifiers for a given key. For instance, using monotonically increasing numbers for versions
can be one such scheme. In the current implementation, we use a blockchain-height-based versioning scheme in which
the height of the committing transaction is used as the latest version for all the keys modified by the transaction. In
this scheme, the height of a transaction is represented by a tuple (blockNumber, txNumber), where txNumber is
the height of the transaction within the block. This scheme has many advantages over the incremental number scheme
- primarily, it enables other components, such as statedb, transaction simulation, and validation, to make efficient
design choices.
Following is an illustration of an example read-write set prepared by simulation of a hypothetical transaction. For the
sake of simplicity, in the illustrations, we use incremental numbers for representing the versions.
<TxReadWriteSet>
  <NsReadWriteSet name="chaincode1">
    <read-set>
      <read key="K1", version="1">
      <read key="K2", version="1">
    </read-set>
    <write-set>
      <write key="K1", value="V1">
      <write key="K3", value="V2">
      <write key="K4", isDelete="true">
    </write-set>
  </NsReadWriteSet>
</TxReadWriteSet>
Additionally, if the transaction performs a range query during simulation, the range query as well as its results will be
added to the read-write set as query-info.
10.10.2 Transaction validation and updating world state using read-write set
A committer uses the read set portion of the read-write set for checking the validity of a transaction and the write
set portion of the read-write set for updating the versions and the values of the affected keys.
In the validation phase, a transaction is considered valid if the version of each key present in the read set of the
transaction matches the version for the same key in the world state - assuming all the preceding valid transactions
(including the preceding transactions in the same block) are committed (committed-state). An additional validation is
performed if the read-write set also contains one or more query-info.
This additional validation should ensure that no key has been inserted/deleted/updated in the super range (i.e., the union
of the ranges) of the results captured in the query-info(s). In other words, if we re-execute any of the range queries
(that the transaction performed during simulation) during validation on the committed-state, it should yield the same
results that were observed by the transaction at the time of simulation. This check ensures that a transaction that would
observe phantom items at commit time is marked as invalid. Note that this phantom protection is
limited to range queries (i.e., the GetStateByRange function in the chaincode) and is not yet implemented for other
queries (i.e., the GetQueryResult function in the chaincode). Other queries are at risk of phantoms, and should
therefore only be used in read-only transactions that are not submitted to ordering, unless the application can guarantee
the stability of the result set between simulation and validation/commit time.
If a transaction passes the validity check, the committer uses the write set for updating the world state. In the update
phase, for each key present in the write set, the value in the world state for the same key is set to the value as specified
in the write set. Further, the version of the key in the world state is changed to reflect the latest version.
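Continuing the same hypothetical sketch from above, commit-time validation and state update for one transaction could look as follows (the query-info phantom re-check described above is omitted for brevity):

// TxRWSet is the read-write set produced by simulation (see the sketch above).
type TxRWSet struct {
    Reads  map[string]Version
    Writes map[string]KVWrite
}

// validateAndCommit accepts the transaction only if every key it read is
// still at the version observed during simulation, then applies its writes,
// stamping each written key with the height of the committing transaction.
func validateAndCommit(world map[string]VersionedValue, tx TxRWSet, commitHeight Version) bool {
    for key, readVer := range tx.Reads {
        if world[key].Ver != readVer {
            return false // stale read: a preceding transaction updated this key
        }
    }
    for key, w := range tx.Writes {
        if w.IsDelete {
            delete(world, key)
            continue
        }
        world[key] = VersionedValue{Ver: commitHeight, Val: w.Value}
    }
    return true
}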
This section helps with understanding the semantics through an example scenario. For the purpose of this example,
the presence of a key, k, in the world state is represented by a tuple (k,ver,val) where ver is the latest version
of the key k having val as its value.
Now, consider a set of five transactions T1, T2, T3, T4, and T5, all simulated on the same snapshot of the
world state. The following snippet shows the snapshot of the world state against which the transactions are simulated
and the sequence of read and write activities performed by each of these transactions.
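For concreteness, assume the following committed snapshot and per-transaction activity, which yield exactly the outcomes listed below:

World state: (k1,1,v1), (k2,1,v2), (k3,1,v3), (k4,1,v4), (k5,1,v5)
T1 -> Write(k1, v1'), Write(k2, v2')
T2 -> Read(k1), Write(k3, v3')
T3 -> Write(k2, v2'')
T4 -> Write(k2, v2'''), Read(k2)
T5 -> Write(k6, v6'), Read(k5)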
Now, assume that these transactions are ordered in the sequence T1,...,T5 (they could be contained in a single block or
in different blocks):
1. T1 passes validation because it does not perform any read. Further, the tuples of keys k1 and k2 in the world
state are updated to (k1,2,v1') and (k2,2,v2').
2. T2 fails validation because it reads a key, k1, which was modified by a preceding transaction, T1.
3. T3 passes validation because it does not perform a read. Further, the tuple of the key, k2, in the world state
is updated to (k2,3,v2'').
4. T4 fails validation because it reads a key, k2, which was modified by a preceding transaction, T1.
5. T5 passes validation because it reads a key, k5, which was not modified by any of the preceding transactions.
Note: Transactions with multiple read-write sets are not yet supported.
Hyperledger Fabric optimizes blockchain network performance, security, and scalability by dividing workload across
transaction execution (endorsing and committing) peers and transaction ordering nodes. This decoupling of network
operations requires a secure, reliable and scalable data dissemination protocol to ensure data integrity and consistency.
To meet these requirements, Fabric implements a gossip data dissemination protocol.
Peers leverage gossip to broadcast ledger and channel data in a scalable fashion. Gossip messaging is continuous, and
each peer on a channel is constantly receiving current and consistent ledger data from multiple peers. Each gossiped
message is signed, thereby allowing Byzantine participants sending faked messages to be easily identified and the
distribution of the message(s) to unwanted targets to be prevented. Peers affected by delays, network partitions, or
other causes resulting in missed blocks will eventually be synced up to the current ledger state by contacting peers in
possession of these missing blocks.
The gossip-based data dissemination protocol performs three primary functions on a Fabric network:
1. Manages peer discovery and channel membership, by continually identifying available member peers, and eventually
detecting peers that have gone offline.
2. Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of
the channel identifies the missing blocks and syncs itself by copying the correct data.
3. Brings newly connected peers up to speed by allowing peer-to-peer state transfer of ledger data.
Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding
these messages to a number of randomly selected peers on the channel, where this number is a configurable constant.
Peers can also exercise a pull mechanism rather than waiting for delivery of a message. This cycle repeats, with the
result of channel membership, ledger and state information continually being kept current and in sync. For dissem-
ination of new blocks, the leader peer on the channel pulls the data from the ordering service and initiates gossip
dissemination to peers in its own organization.
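A minimal Go sketch of the push side of this cycle, with hypothetical names rather than Fabric's actual gossip code:

package gossip

import "math/rand"

// fanout is the configurable constant: how many randomly selected peers
// each received message is forwarded to.
const fanout = 3

type peer struct {
    endpoint string
}

// forward sends msg to up to fanout peers chosen uniformly at random from
// the channel membership known to this peer.
func forward(msg []byte, channelPeers []peer, send func(peer, []byte)) {
    n := fanout
    if n > len(channelPeers) {
        n = len(channelPeers)
    }
    for _, idx := range rand.Perm(len(channelPeers))[:n] {
        send(channelPeers[idx], msg)
    }
}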
The leader election mechanism is used to elect one peer per organization which will maintain connection with the
ordering service and initiate distribution of newly arrived blocks across the peers of its own organization. Leveraging
leader election provides the system with the ability to efficiently utilize the bandwidth of the ordering service. There
are two possible modes of operation for a leader election module:
1. Static — a system administrator manually configures a peer in an organization to be the leader.
2. Dynamic — peers execute a leader election procedure to select one peer in an organization to become leader.
Static leader election allows you to manually define one or more peers within an organization as leader peers. Please
note, however, that having too many peers connect to the ordering service may result in inefficient use of bandwidth.
To enable static leader election mode, configure the following parameters within the gossip section of core.yaml:
peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: false
        orgLeader: true
Alternatively, these parameters can be configured and overridden with environment variables:
export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=true
Note:
1. The following configuration will keep the peer in stand-by mode, i.e. the peer will not try to become a leader:
export CORE_PEER_GOSSIP_USELEADERELECTION=false
export CORE_PEER_GOSSIP_ORGLEADER=false
2. In static configuration, the organization admin is responsible for providing high availability of the leader node in
case of failure or crashes.
Dynamic leader election enables organization peers to elect one peer which will connect to the ordering service and
pull out new blocks. This leader is elected for an organization’s peers independently.
A dynamically elected leader sends heartbeat messages to the rest of the peers as evidence of liveness. If one or
more peers don't receive heartbeat updates during a set period of time, they elect a new leader.
In the worst-case scenario of a network partition, there will be more than one active leader per organization, guaranteeing
resiliency and availability and allowing the organization's peers to continue making progress. After the network partition
has healed, one of the leaders will relinquish its leadership. In a steady state with no network partitions, there
will be only one active leader connecting to the ordering service.
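The heartbeat rule can be sketched in Go as follows; the names are illustrative, not the actual election module in the peer's gossip component:

package election

import "time"

// leaderAliveThreshold mirrors the configuration value shown below.
const leaderAliveThreshold = 10 * time.Second

// monitorLeader consumes leader heartbeats and triggers a new election
// whenever no heartbeat is observed for leaderAliveThreshold.
func monitorLeader(heartbeats <-chan struct{}, startElection func()) {
    timer := time.NewTimer(leaderAliveThreshold)
    defer timer.Stop()
    for {
        select {
        case <-heartbeats:
            // The leader is alive: reset the liveness timer, draining it
            // first if it already fired.
            if !timer.Stop() {
                select {
                case <-timer.C:
                default:
                }
            }
            timer.Reset(leaderAliveThreshold)
        case <-timer.C:
            startElection() // leader presumed dead: elect a new one
            timer.Reset(leaderAliveThreshold)
        }
    }
}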
The following configuration controls the frequency of the leader heartbeat messages:
peer:
    # Gossip related configuration
    gossip:
        election:
            leaderAliveThreshold: 10s
In order to enable dynamic leader election, the following parameters need to be configured within core.yaml:
peer:
    # Gossip related configuration
    gossip:
        useLeaderElection: true
        orgLeader: false
Alternatively, these parameters can be configured and overridden with environment variables:
export CORE_PEER_GOSSIP_USELEADERELECTION=true
export CORE_PEER_GOSSIP_ORGLEADER=false
Anchor peers are used by gossip to make sure peers in different organizations know about each other.
When a configuration block that contains an update to the anchor peers is committed, peers reach out to the anchor
peers and learn from them about all of the peers known to the anchor peer(s). Once at least one peer from each
organization has contacted an anchor peer, the anchor peer learns about every peer in the channel. Since gossip
communication is constant, and because peers always ask to be told about the existence of any peer they don’t know
about, a common view of membership can be established for a channel.
For example, let’s assume we have three organizations—A, B, C— in the channel and a single anchor
peer—peer0.orgC— defined for organization C. When peer1.orgA (from organization A) contacts peer0.orgC, it will
tell it about peer0.orgA. And when at a later time peer1.orgB contacts peer0.orgC, the latter would tell the former
about peer0.orgA. From that point forward, organizations A and B would start exchanging membership information
directly without any assistance from peer0.orgC.
As communication across organizations depends on gossip in order to work, there must be at least one anchor peer
defined in the channel configuration. It is strongly recommended that every organization provides its own set of anchor
peers for high availability and redundancy. Note that the anchor peer does not need to be the same peer as the leader
peer.
In order for gossip to work effectively, peers need to be able to obtain the endpoint information of peers in their own
organization as well as from peers in other organizations.
When a peer is bootstrapped it will use peer.gossip.bootstrap in its core.yaml to advertise itself and
exchange membership information, building a view of all available peers within its own organization.
The peer.gossip.bootstrap property in the core.yaml of the peer is used to bootstrap gossip within an
organization. If you are using gossip, you will typically configure all the peers in your organization to point to an
initial set of bootstrap peers (you can specify a space-separated list of peers). The internal endpoint is usually auto-
computed by the peer itself or just passed explicitly via core.peer.address in core.yaml. If you need to
overwrite this value, you can export CORE_PEER_GOSSIP_ENDPOINT as an environment variable.
Bootstrap information is similarly required to establish communication across organizations. The initial cross-organization
bootstrap information is provided via the "anchor peers" setting described above. If you want
to make other peers in your organization known to other organizations, you need to set the
peer.gossip.externalendpoint property in the core.yaml of your peer. If this is not set, the endpoint information of the peer
will not be broadcast to peers in other organizations.
To set these properties, issue:

export CORE_PEER_GOSSIP_BOOTSTRAP=<a list of peer endpoints within the peer's org>
export CORE_PEER_GOSSIP_EXTERNALENDPOINT=<the peer endpoint, as known outside the org>
Online peers indicate their availability by continually broadcasting “alive” messages, with each containing the public
key infrastructure (PKI) ID and the signature of the sender over the message. Peers maintain channel membership by
collecting these alive messages; if no peer receives an alive message from a specific peer, this “dead” peer is eventually
purged from channel membership. Because “alive” messages are cryptographically signed, malicious peers can never
impersonate other peers, as they lack a signing key authorized by a root certificate authority (CA).
In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes world state
across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair
its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data
dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for
node crashes.
Because channels are segregated, peers on one channel cannot message or share information on any other channel.
Though any peer can belong to multiple channels, partitioned messaging prevents blocks from being disseminated to
peers that are not in the channel, by applying message routing policies based on a peer's channel subscriptions.
Note: 1. The security of point-to-point messages is handled by the peer TLS layer and does not require signatures.
Peers are authenticated by their certificates, which are assigned by a CA. Although TLS certs are also used, it is the
peer certificates that are authenticated in the gossip layer. Ledger blocks are signed by the ordering service, and then
delivered to the leader peers on a channel.
2. Authentication is governed by the membership service provider for the peer. When the peer connects to the channel
for the first time, the TLS session binds with the membership identity. This essentially authenticates each peer to the
connecting peer, with respect to membership in the network and channel.
11.1 Endorsement
Endorsement architecture:
Question How many peers in the network need to endorse a transaction?
Answer The number of peers required to endorse a transaction is driven by the endorsement policy that
is specified at chaincode deployment time.
Question Does an application client need to connect to all peers?
Answer Clients only need to connect to as many peers as are required by the endorsement policy for the
chaincode.
Fourth, you can restrict data access to certain roles in your organization, by building access control
into the chaincode logic.
Fifth, ledger data at rest can be encrypted via file system encryption on the peer, and data in-transit
is encrypted via TLS.
Question Do the orderers see the transaction data?
Answer No, the orderers only order transactions, they do not open the transactions. If you do not want
the data to go through the orderers at all, then utilize the private data feature of Fabric. Alternatively,
you can hash or encrypt the data in the client application before calling chaincode. If you encrypt
the data then you will need to provide a means to share the decryption keys.
Question How do I create a business contract?
Answer There are generally two ways to develop business contracts: the first way is to code individual
contracts into standalone instances of chaincode; the second way, and probably the more efficient
way, is to use chaincode to create decentralized applications that manage the life cycle of one or
multiple types of business contracts, and let end users instantiate instances of contracts within these
applications.
Question How do I create assets?
Answer Users can use chaincode (for business rules) and membership service (for digital tokens) to
design assets, as well as the logic that manages them.
There are two popular approaches to defining assets in most blockchain solutions: the stateless
UTXO model, where account balances are encoded into past transaction records; and the account
model, where account balances are kept in state storage space on the ledger.
Each approach carries its own benefits and drawbacks. This blockchain technology does not ad-
vocate either one over the other. Instead, one of our first requirements was to ensure that both
approaches can be easily implemented.
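For illustration, a fragment of an account-model transfer written as Go chaincode might look like the following sketch; the function and key layout are hypothetical, and most error handling is trimmed:

package accounts

import (
    "fmt"
    "strconv"

    "github.com/hyperledger/fabric/core/chaincode/shim"
)

// transfer implements an account-model asset: balances are kept directly in
// state storage under each account's key. The surrounding Init/Invoke
// plumbing of a deployable chaincode is omitted.
func transfer(stub shim.ChaincodeStubInterface, from, to string, amount int) error {
    fromBytes, err := stub.GetState(from)
    if err != nil {
        return err
    }
    toBytes, err := stub.GetState(to)
    if err != nil {
        return err
    }
    // Missing accounts default to a balance of 0 (Atoi errors ignored here).
    fromBal, _ := strconv.Atoi(string(fromBytes))
    toBal, _ := strconv.Atoi(string(toBytes))
    if fromBal < amount {
        return fmt.Errorf("insufficient funds in account %s", from)
    }
    if err := stub.PutState(from, []byte(strconv.Itoa(fromBal-amount))); err != nil {
        return err
    }
    return stub.PutState(to, []byte(strconv.Itoa(toBal+amount)))
}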
Question Which languages are supported for writing chaincode?
Answer Chaincode can be written in any programming language and executed in containers. Currently,
Go, Node.js, and Java chaincode are supported.
It is also possible to build Hyperledger Fabric applications using Hyperledger Composer.
Question Does Hyperledger Fabric have a native currency?
Answer No. However, if you really need a native currency for your chain network, you can develop your
own native currency with chaincode. One common attribute of native currency is that some amount
will get transacted (the chaincode defining that currency will get called) every time a transaction is
processed on its chain.
Question Where can I find the highlighted differences between releases?
Answer The differences between any subsequent releases are provided together with the Releases.
Question Where can I get help for technical questions not answered above?
Answer Please use StackOverflow.
Question I have an ordering service up and running and want to switch consensus algorithms. How
do I do that?
Answer This is explicitly not supported.
Question What is the orderer system channel?
Answer The orderer system channel (sometimes called ordering system channel) is the channel the
orderer is initially bootstrapped with. It is used to orchestrate channel creation. The orderer system
channel defines consortia and the initial configuration for new channels. At channel creation time,
the organization definition in the consortium, the /Channel group's values and policies, as well
as the /Channel/Orderer group's values and policies, are all combined to form the new initial
channel definition.
11.6.1 Solo
11.6.2 Kafka
Question I’m trying to follow the BYFN example and get a “service unavailable” error, what should
I do?
Answer Check the ordering service’s logs. A “Rejecting deliver request because of consenter error” log
message is usually indicative of a connection problem with the Kafka cluster. Ensure that the Kafka
cluster is set up properly, and is reachable by the ordering service’s nodes.
11.6.3 BFT
Contributions Welcome!
We welcome contributions to Hyperledger in many forms, and there’s always plenty to do!
First things first, please review the Hyperledger Code of Conduct before participating. It is important that we keep
things civil.
There are many ways you can contribute to Hyperledger Fabric, both as a user and as a developer.
As a user:
• Making Feature/Enhancement Proposals
• Reporting bugs
• Help test an upcoming Epic on the release roadmap. Contact the Epic assignee via the Jira work item or on
RocketChat.
As a developer:
• If you only have a little time, consider picking up a “help-wanted” task, see Fixing issues and working stories.
• If you can commit to full-time development, either propose a new feature (see Making Feature/Enhancement
Proposals) and bring a team to implement it, or join one of the teams working on an existing Epic. If you
see an Epic that interests you on the release roadmap, contact the Epic assignee via the Jira work item or on
RocketChat.
In order to participate in the development of the Hyperledger Fabric project, you will need a Linux Foundation account.
You will need to use your LF ID to access to all the Hyperledger community tools, including Gerrit, Jira, RocketChat,
and the Wiki (for editing, only).
Hyperledger Fabric is managed under an open governance model as described in our charter. Projects and sub-projects
are led by a set of maintainers. New sub-projects can designate an initial set of maintainers that will be approved by
the top-level project's existing maintainers when the project is first approved.
12.3.1 Maintainers
The Fabric project is led by the project's top-level maintainers. The maintainers are responsible for reviewing and
merging all patches submitted for review, and they guide the overall technical direction of the project within the
guidelines established by the Hyperledger Technical Steering Committee (TSC).
The project's maintainers will, from time to time, consider adding or removing a maintainer. An existing maintainer
can submit a change set to the MAINTAINERS.rst file. A nominated Contributor may become a Maintainer by a
majority approval of the proposal by the existing Maintainers. Once approved, the change set is then merged and
the individual is added to (or alternatively, removed from) the maintainers group. Maintainers may be removed by
explicit resignation, for prolonged inactivity (3 or more months), or for some infraction of the code of conduct or
by consistently demonstrating poor judgement. A maintainer removed for inactivity should be restored following a
sustained resumption of contributions and reviews (a month or more) demonstrating a renewed commitment to the
project.
The Fabric maintainers have settled on a quarterly (approximately) release cadence (see releases). We are also actively
considering adopting an LTS (long term support) release process, though the details of this are still being worked out
by the maintainers. Follow the discussion on the #fabric-maintainers channel in Chat.
First, take time to review JIRA to be sure that there isn't already an open (or recently closed) proposal for the same
function. If there isn't, to make a proposal we recommend that you open a JIRA Epic or Story, whichever seems to
best fit the circumstance, and link or inline a "one pager" of the proposal that states what the feature would do and,
if possible, how it might be implemented. It also helps to make a case for why the feature should be added, such as
identifying the specific use case(s) for which the feature is needed and the benefit that implementing it would deliver.
Once the JIRA issue is created, and the "one pager" is either attached, inlined in the description field, or linked from
the description as a publicly accessible document, send an introductory email to the
hyperledger-fabric@lists.hyperledger.org mailing list linking the JIRA issue and soliciting feedback.
Discussion of the proposed feature should be conducted in the JIRA issue itself, so that we have a consistent pattern
within our community as to where to find design discussion.
Getting the support of three or more of the Hyperledger Fabric maintainers for the new feature will greatly enhance
the probability that the feature’s related CRs will be included in a subsequent release.
The maintainers hold a meeting every other Wednesday at 9 am ET on Zoom. Please see the community
calendar for details.
The purpose of the maintainers meeting is to plan for and review the progress of releases, and to discuss the technical
and operational direction of the project and sub-projects.
New feature/enhancement proposals as described above should be presented to a maintainers meeting for consideration,
feedback, and acceptance.
12.3.7 Communications
We use RocketChat for communication and Google Hangouts™ for screen sharing between developers. Our
development planning and prioritization is done in JIRA, and we take longer-running discussions/decisions to the mailing
list.
Before we begin, if you haven’t already done so, you may wish to check that you have all the prerequisites installed
on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.
If you are looking for something to work on, or need some expert assistance in debugging a problem or working out
a fix to an issue, our community is always eager to help. We hang out on Chat, IRC (#hyperledger on freenode.net)
and the mailing lists. Most of us don’t bite :grin: and will be glad to help. The only silly question is the one you don’t
ask. Questions are in fact a great way to help improve the project as they highlight where our documentation could be
clearer.
If you are a user and you have found a bug, please submit an issue using JIRA. Before you create a new JIRA issue,
please try to search the existing items to be sure no one else has previously reported it. If it has been previously
reported, then you might add a comment that you also are interested in seeing the defect fixed.
Note: If the defect is security-related, please follow the Hyperledger security bug reporting process.
If it has not been previously reported, create a new JIRA. Please try to provide sufficient information for someone
else to reproduce the issue. One of the project’s maintainers should respond to your issue within 24 hours. If not,
please bump the issue with a comment and request that it be reviewed. You can also post to the relevant Hyperledger
Fabric channel in Hyperledger Chat. For example, a doc bug should be broadcast to #fabric-documentation,
a database bug to #fabric-ledger, and so on.
If you just submitted a JIRA for a bug you’ve discovered, and would like to provide a fix, we would welcome that
gladly! Please assign the JIRA issue to yourself, then you can submit a change request (CR).
Note: If you need help with submitting your first CR, we have created a brief tutorial for you.
Review the issues list and find something that interests you. You could also check the "help-wanted" list. It is wise
to start with something relatively straightforward and achievable, and that is not already assigned to anyone. If no one
is assigned, assign the issue to yourself. Please be considerate and rescind the assignment if you cannot finish in a
reasonable time, or add a comment saying that you are still actively working on the issue if you need a little more time.
Another way to contribute and learn about Hyperledger Fabric is to help the maintainers with the review of open CRs.
Maintainers have the difficult role of reviewing every CR that is submitted and evaluating whether it should be
merged. You can review the code and/or documentation changes, test the changes, and tell the submitters and
maintainers what you think. Once your review and/or test is complete, reply to the CR with your findings by adding
comments and/or voting. A comment such as "I tried it on system X and it works" or "I got an error on system X: xxx"
will help the maintainers in their evaluation. As a result, maintainers will be able to process CRs faster and everybody
will gain from it.
Just browse through the open CRs on Gerrit to get started.
Next, try building the project in your local development environment to ensure that everything is set up correctly.
• One change at a time. Not five, not three, not ten. One and only one. Why? Because it limits the blast area
of the change. If we have a regression, it is much easier to identify the culprit commit than if we have some
composite change that impacts more of the code.
• Include a link to the JIRA story for the change. Why? Because a) we want to track our velocity to better judge
what we think we can deliver and when, and b) we can justify the change more effectively. In many cases, there
should be some discussion around a proposed change, and we want to link back to that from the change itself.
• Include unit and integration tests (or changes to existing tests) with every change. This does not mean just happy-path
testing, either. It also means negative testing of any defensive code, to verify that it correctly catches input errors.
When you write code, you are responsible for testing it and providing the tests that demonstrate that your change does
what it claims. Why? Because without this we have no clue whether our current code base actually works.
• Unit tests should have NO external dependencies. You should be able to run unit tests in place with go test
or the equivalent for the language. Any test that requires some external dependency (e.g. needs to be scripted to
run another component) needs appropriate mocking (see the sketch at the end of these guidelines). Anything else
is not unit testing, it is integration testing by definition. Why? Because many open source developers do Test Driven
Development. They place a watch on the directory that invokes the tests automagically as the code is changed. This
is far more efficient than having to run a whole build between code changes. See this definition of unit testing for a
good set of criteria to keep in mind for writing effective unit tests.
• Minimize the lines of code per CR. Why? Maintainers have day jobs, too. If you send a 1,000 or 2,000 LOC
change, how long do you think it takes to review all of that code? Keep your changes to < 200-300 LOC, if
possible. If you have a larger change, decompose it into multiple independent changes. If you are adding a
bunch of new functions to fulfill the requirements of a new capability, add them separately with their tests, and
then write the code that uses them to deliver the capability. Of course, there are always exceptions. If you add
a small change and then add 300 LOC of tests, you will be forgiven;-) Changes that have broad impact, or that
consist mostly of generated code (protobufs, etc.), can also be exceptions.
Note: Large change requests, e.g. those with more than 300 LOC are more likely than not going to receive a -2, and
you’ll be asked to refactor the change to conform with this guidance.
• Do not stack change requests (e.g. submit a CR from the same local branch as your previous CR) unless they are
related. This will minimize merge conflicts and allow changes to be merged more quickly. If you stack requests
your subsequent requests may be held up because of review comments in the preceding requests.
• Write a meaningful commit message. Include a meaningful title of 55 characters or fewer, followed by a blank line,
followed by a more comprehensive description of the change. Each change MUST include the JIRA identifier
corresponding to the change (e.g. [FAB-1234]). This can be in the title but should also be in the body of the
commit message. See the complete requirements for an acceptable change request.
Note: Gerrit will automatically create a hyperlink to the JIRA item.
Finally, be responsive. Don’t let a change request fester with review comments such that it gets to a point that it
requires a rebase. It only further delays getting it merged and adds more work for you - to remediate the merge
conflicts.
Note: Each source file must include a license header for the Apache Software License 2.0. See the template of the
license header.
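As an illustration of the mocking guidance above (all names are hypothetical, not from the Fabric code base), a unit test can hide the external dependency behind an interface and substitute an in-memory stub, so go test needs no running components:

package ledger

import "testing"

// Store abstracts the persistence dependency of the code under test.
type Store interface {
    Get(key string) ([]byte, error)
}

// mockStore is an in-memory stand-in for the real, external store.
type mockStore map[string][]byte

func (m mockStore) Get(key string) ([]byte, error) { return m[key], nil }

// lookup is the unit under test; it depends only on the Store interface,
// so it can be exercised without any external component.
func lookup(s Store, key string) (string, error) {
    b, err := s.Get(key)
    if err != nil {
        return "", err
    }
    return string(b), nil
}

func TestLookup(t *testing.T) {
    s := mockStore{"k1": []byte("v1")}
    got, err := lookup(s, "k1")
    if err != nil || got != "v1" {
        t.Fatalf("lookup(k1) = %q, %v; want \"v1\", nil", got, err)
    }
}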
We have tried to make it as easy as possible to make contributions. This applies to how we handle the legal aspects of
contribution. We use the same approach—the Developer’s Certificate of Origin 1.1 (DCO)—that the Linux® Kernel
community uses to manage code contributions.
We simply ask that when submitting a patch for review, the developer must include a sign-off statement in the commit
message.
Here is an example Signed-off-by line, which indicates that the submitter accepts the DCO:

Signed-off-by: John Doe <john.doe@example.com>
You can include this automatically when you commit a change to your local git repository using git commit -s.
12.6.1 Maintainers
Active Maintainers
Release Managers
Retired Maintainers
This document has been created to give further insight into the work in progress towards the Hyperledger Fabric v1
architecture based on the community roadmap. The requirements for the roadmap are being tracked in Jira.
It was determined to organize in sprints to better track and show a prioritized order of items to be implemented based
on feedback received. We’ve done this via boards. To see these boards and the priorities click on Boards -> Manage
Boards:
On this page you will see all the public (and restricted) boards that have been created. If you want to see the items
with current sprint focus, click on the boards where the column labeled Visibility is All Users and the column Board
type is labeled Scrum. For example the Board Name Consensus:
When you click on Consensus under Board name you will be directed to a page that contains the following columns:
The meanings of these columns are as follows:
• Backlog – list of items slated for the current sprint (sprints are defined in 2 week iterations), but are not currently
in progress
• In progress – items currently being worked by someone in the community.
• In Review – items waiting to be reviewed and merged in Gerrit
• Done – items merged and complete in the sprint.
If you want to see all items in the backlog for a given feature set, click on the stacked rows on the left navigation of
the screen:
This shows you items slated for the current sprint at the top, and all items in the backlog at the bottom. Items are listed
in priority order.
If there is an item you are interested in working on, want more information or have questions, or if there is an item that
you feel needs to be in higher priority, please add comments directly to the Jira item. All feedback and help is very
much appreciated.
Overview
Prior to the v1.0.0 release, the development environment utilized Vagrant running an Ubuntu image, which in turn
launched Docker containers as a means of ensuring a consistent experience for developers who might be working with
varying platforms, such as macOS, Windows, Linux, or whatever. Advances in Docker have enabled native support
on the most popular development platforms: macOS and Windows. Hence, we have reworked our build to take full
advantage of these advances. While we still maintain a Vagrant based approach that can be used for older versions of
macOS and Windows that Docker does not support, we strongly encourage that the non-Vagrant development setup
be used.
Note that while the Vagrant-based development setup could not be used in a cloud context, the Docker-based build
does support cloud platforms such as AWS, Azure, Google and IBM to name a few. Please follow the instructions for
Ubuntu builds, below.
Prerequisites
• Git client
• Go - version 1.11.x
• (macOS) Xcode must be installed
• Docker - 17.06.2-ce or later
• Docker Compose - 1.14.0 or later
• Pip
• (macOS) you may need to install gnutar, as macOS comes with bsdtar as the default, but the build uses some
gnutar flags. You can use Homebrew to install it as follows:

brew install gnu-tar --with-default-names
Steps
Make sure you have properly set up your Host's GOPATH environment variable. This allows for building both within
the Host and within the VM.
If you installed Go into a location other than the standard one your Go distribution assumes, make sure that
you also set the GOROOT environment variable.
If you are running Windows, before running any git clone commands, run the following command:

git config --get core.autocrlf

If core.autocrlf is set to true, you must set it to false by running:

git config --global core.autocrlf false

If you continue with core.autocrlf set to true, the vagrant up command will fail with the error:
./setup.sh: /bin/bash^M: bad interpreter: No such file or directory
Since Hyperledger Fabric is written in Go, you’ll need to clone the source repository to your $GOPATH/src directory.
If your $GOPATH has multiple path components, then you will want to use the first one. There’s a little bit of setup
needed:
cd $GOPATH/src
mkdir -p github.com/hyperledger
cd github.com/hyperledger
Recall that we are using Gerrit for source control, which has its own internal git repositories. Hence, we will need
to clone from Gerrit. For brevity, the command is as follows:

git clone ssh://LFID@gerrit.hyperledger.org:29418/fabric && cd fabric

Note: Of course, you would want to replace LFID with your own Linux Foundation ID.
If you are planning on using the Vagrant developer environment, the following steps apply. Again, we recommend
against its use except for developers that are limited to older versions of macOS and Windows that are not
supported by Docker for Mac or Windows.
cd $GOPATH/src/github.com/hyperledger/fabric/devenv
vagrant up
Go get coffee. . . this will take a few minutes. Once complete, you should be able to ssh into the Vagrant VM just
created.
vagrant ssh
Once inside the VM, you can find the source under $GOPATH/src/github.com/hyperledger/fabric. It
is also mounted as /hyperledger.
Once you have all the dependencies installed, and have cloned the repository, you can proceed to build and test
Hyperledger Fabric.
Notes
NOTE: Any time you change any of the files in your local fabric directory (under $GOPATH/src/github.com/
hyperledger/fabric), the update will be instantly available within the VM fabric directory.
NOTE: If you intend to run the development environment behind an HTTP Proxy, you need to configure the guest so
that the provisioning process may complete. You can achieve this via the vagrant-proxyconf plugin. Install with
vagrant plugin install vagrant-proxyconf and then set the VAGRANT_HTTP_PROXY and
VAGRANT_HTTPS_PROXY environment variables before you execute vagrant up. More details are available here:
https://github.com/tmatilai/vagrant-proxyconf/
NOTE: The first time you run this command it may take quite a while to complete (it could take 30 minutes or more
depending on your environment) and at times it may look like it's not doing anything. As long as you don't get any
error messages, just leave it alone; it's all good, it's just cranking.
NOTE to Windows 10 Users: There is a known problem with vagrant on Windows 10 (see hashicorp/vagrant#6754).
If the vagrant up command fails it may be because you do not have the Microsoft Visual C++ Redistributable
package installed. You can download the missing package at the following address: http://www.microsoft.com/en-us/
download/details.aspx?id=8328
NOTE: The inclusion of the miekg/pkcs11 package introduces an external dependency on the ltdl.h header file dur-
ing a build of fabric. Please ensure your libtool and libltdl-dev packages are installed. Otherwise, you may get a
ltdl.h header missing error. You can download the missing package by command: sudo apt-get install -y
build-essential git make curl unzip g++ libtool.
The following instructions assume that you have already set up your development environment.
To build Hyperledger Fabric:
cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean all
cd $GOPATH/src/github.com/hyperledger/fabric
make unit-test
To run a subset of tests, set the TEST_PKGS environment variable. Specify a list of packages (separated by space),
for example:
export TEST_PKGS="github.com/hyperledger/fabric/core/ledger/..."
make unit-test
To run a specific test, use the -run RE flag, where RE is a regular expression that matches the test case name. To run
tests with verbose output, use the -v flag. For example, to run the TestGetFoo test case, change to the directory
containing foo_test.go and run:
go test -v -run=TestGetFoo
You must also run the Node.js unit tests to ensure that the Node.js client SDK is not broken by your changes. To run
the Node.js unit tests, follow the instructions here.
It is possible to build the project and run peers outside of Vagrant. Generally speaking, you have to "translate" the
Vagrant setup file to the platform of your choice.
Building on Z
To make building on Z easier and faster, this script is provided (similar to the setup file provided for Vagrant).
This script has been tested only on RHEL 7.2 and makes some assumptions you might want to revisit (firewall settings,
development as the root user, etc.). It is, however, sufficient for development in a personally-assigned VM instance.
To get started, from a freshly installed OS:
sudo su
yum install git
mkdir -p $HOME/git/src/github.com/hyperledger
cd $HOME/git/src/github.com/hyperledger
git clone http://gerrit.hyperledger.org/r/fabric
source fabric/devenv/setupRHELonZ.sh
From this point, you can proceed as described above for the Vagrant development environment.
cd $GOPATH/src/github.com/hyperledger/fabric
make peer unit-test
Development and build on Power (ppc64le) systems is done outside of Vagrant, as outlined here. For ease of setting
up the dev environment on Ubuntu, invoke this script as root. This script has been validated on Ubuntu 16.04 and
assumes certain things (for example, that the development system has OS repositories in place, firewall settings, etc.)
and in general can be improved further.
To get started on Power server installed with Ubuntu, first ensure you have properly setup your Host’s GOPATH
environment variable. Then, execute the following commands to build the fabric code:
mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
git clone http://gerrit.hyperledger.org/r/fabric
sudo ./fabric/devenv/setupUbuntuOnPPC64le.sh
cd $GOPATH/src/github.com/hyperledger/fabric
make dist-clean all
Building on CentOS 7
You will have to build CouchDB from source because there is no package available from the distribution. If you are
planning a multi-orderer arrangement, you will also need to install Apache Kafka from source. Apache Kafka includes
both Zookeeper and Kafka executables and supporting artifacts.
If you are not trying to build for Docker, you only need the native binaries.
12.6.6 Configuration
The logging level of the peer can be overridden at startup by setting an environment variable, for example:

CORE_PEER_LOGGING_LEVEL=CRITICAL peer
Contributions to the Hyperledger Fabric code base require a Linux Foundation account — follow the steps below to
create one if you don’t already have one.
Gerrit uses SSH to interact with your Git client. If you already have an SSH key pair, you can skip the part of this
section that explains how to generate one.
What follows explains how to generate an SSH key pair in a Linux environment; follow the equivalent steps on your
OS.
First, create an SSH key pair with the command:

ssh-keygen -t rsa -C "John Doe john.doe@example.com"
Note: This will ask you for a passphrase to protect the private key as it generates a unique key. Please keep this
passphrase private, and DO NOT enter a blank passphrase.
The generated SSH key pair can be found in the files ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.
Next, add the private key in the id_rsa file to your key ring, e.g.:
ssh-add ~/.ssh/id_rsa
Finally, add the public key of the generated key pair to the Gerrit server, with the following steps:
1. Go to Gerrit.
2. Click on your account name in the upper right corner.
Host gerrit.hyperledger.org
    HostName gerrit.hyperledger.org
    IdentityFile ~/.ssh/id_rsa_hyperledger_gerrit
    User <LFID>
where <LFID> is your Linux Foundation ID and the value of IdentityFile is the path to the private key of the key pair
you generated. Only the matching public key is uploaded to Gerrit.
Warning: Potential Security Risk! Do not copy your private key ~/.ssh/id_rsa. Use only the public ~/.ssh/
id_rsa.pub.
Once you've set up SSH as explained in the previous section, you can clone the source code repository with the
command:

git clone ssh://<LFID>@gerrit.hyperledger.org:29418/fabric fabric

You have now successfully checked out a copy of the source code to your local machine.
Follow these instructions to collaborate on Hyperledger Fabric through the Gerrit review system.
Please be sure that you are subscribed to the mailing list and of course, you can reach out on chat if you need help.
Gerrit assigns the following roles to users:
• Submitters: May submit changes for consideration, review other code changes, and make recommendations
for acceptance or rejection by voting +1 or -1, respectively.
• Maintainers: May approve or reject changes based upon feedback from reviewers voting +2 or -2, respectively.
• Builders: (e.g. Jenkins) May use the build automation infrastructure to verify the change.
Maintainers should be familiar with the review process. However, anyone is welcome (and encouraged!) to review
changes, and hence may find that document of value.
Git-review
There’s a very useful tool for working with Gerrit called git-review. This command-line tool can automate most of the
ensuing sections for you. Of course, reading the information below is also highly recommended so that you understand
what’s going on behind the scenes.
A comprehensive walk-through of Gerrit is beyond the scope of this document. There are plenty of resources available
on the Internet. A good summary can be found here. We have also provided a set of Best Practices that you may find
helpful.
Note: If you are cloning the fabric project repository, you will want to clone it to the
$GOPATH/src/github.com/hyperledger directory so that it will build, and so that you can use it with the Vagrant
development environment.
5. Commit your code. For an in-depth discussion of creating an effective commit, please read this document on
submitting changes.
git commit -s -a
Submitting a Change
Note: Please review the guidelines for making and submitting a change.
Note: If you prefer, you can use the git-review tool instead of the following steps.
Add the following section to .git/config, and replace <USERNAME> with your Gerrit ID.
[remote "gerrit"]
    url = ssh://<USERNAME>@gerrit.hyperledger.org:29418/fabric.git
    fetch = +refs/heads/*:refs/remotes/gerrit/*
Then submit your change for review with the git review command. When you update your patch, you can commit
with git commit --amend and then repeat git review.
If the command executes correctly, the output includes a link, generated by the Gerrit server, where the change can
be tracked.
• Add: This button allows the change submitter to manually add names of people who should review a change;
start typing a name and the system will auto-complete based on the list of people registered and with access to
the system. They will be notified by email that you are requesting their input.
• Abandon: This button is available to the submitter only; it allows a committer to abandon a change and remove
it from the merge queue.
• Change-Id: This ID is generated by Gerrit (or the system). It becomes useful when the review process determines
that your commit(s) have to be amended. You may submit a new version; if the same Change-Id header
(and value) is present, Gerrit will remember it and present it as another version of the same change.
• Status: Currently, the example change is in review status, as indicated by “Needs Verified” in the upper-left
corner. The list of Reviewers will all emit their opinion, voting +1 if they agree to the merge, -1 if they disagree.
Gerrit users with a Maintainer role can agree to the merge or refuse it by voting +2 or -2 respectively.
Notifications are sent to the email address in your commit message’s Signed-off-by line. Visit your Gerrit dashboard,
to check the progress of your requests.
The history tab in Gerrit will show you the in-line comments and the author of the review.
Find all pending changes by clicking on the All --> Changes link in the upper-left corner, or open this link.
If you collaborate in multiple projects, you may wish to limit searching to the specific branch through the search bar
in the upper-right side.
Add the filter project:fabric to limit the visible changes to only those from Hyperledger Fabric.
List all current changes you submitted, or list just those changes in need of your input by clicking on My -->
Changes or open this link
Carefully review the following before submitting a change to the Hyperledger Fabric code base. These guidelines
apply to developers that are new to open source, as well as to experienced open source developers.
Change Requirements
This section contains guidelines for submitting code changes for review. For more information on how to submit a
change using Gerrit, please see Working with Gerrit.
All changes to Hyperledger Fabric are submitted as Git commits via Gerrit. Each commit must contain:
• a short and descriptive subject line that is 55 characters or fewer, followed by a blank line,
• a change description with the logic or reasoning for your changes, followed by a blank line,
• a Signed-off-by line (Signed-off-by:), and
• a Change-Id identifier line (Change-Id:); Gerrit won't accept patches without this identifier.
A commit with the above details is considered well-formed.
Note: You don’t need to supply the Change-Id identifier for a new commit; this is added automatically by the
commit-msg Git hook associated with the repository. If you subsequently amend your commit and resubmit it,
using the same Change-Id value as the initial commit will guarantee that Gerrit will recognize that subsequent commit
as an amended commit with respect to the earlier one.
All changes and topics sent to Gerrit must be well-formed. In addition to the above mandatory content in a commit, a
commit message should include:
• what the change does,
• why you chose that approach, and
• how you know it works — for example, which tests you ran.
Commits must build cleanly when applied on top of each other, thus avoiding breaking bisectability. Each commit
should address a single identifiable JIRA issue and should be logically self-contained. For example, one commit
might fix whitespace issues, another commit might rename a function, while a third commit could change some code’s
functionality.
A well-formed commit is illustrated below in detail:

[FAB-XXXX] purpose of commit, no more than 55 characters

You can add more details here in several paragraphs, but please keep
each line less than 80 characters long.

Change-Id: IF7b6ac513b2eca5f2bab9728ebd8b7e504d3cebe1
Signed-off-by: Your Name <your.email@example.org>
The name in the Signed-off-by: line and your email must match the change authorship information. Make sure
your personal .gitconfig file is set up correctly.
When a change is included in the set to enable other changes, but it will not be part of the final set, please let the
reviewers know this.
To ensure stability of the code and limit possible regressions, we use a Continuous Integration (CI) process based on
Jenkins which triggers a build on several platforms and runs tests against every change request being submitted. It is
your responsibility to check that your CR passes these tests. No CR will ever be merged if it fails the tests and you
shouldn’t expect anybody to pay attention to your CRs until they pass the CI tests.
To check on the status of the CI process, simply look at your CR on Gerrit, following the URL that was given to you
as the result of the push in the previous step. The History section at the bottom of the page will display a set of actions
taken by “Hyperledger Jobbuilder” corresponding to the CI process being executed.
Upon completion, “Hyperledger Jobbuilder” will add to the CR a +1 vote if successful and a -1 vote otherwise.
In case of failure, explore the logs linked from the CR History. If you spot a problem with your CR, amend your
commit and push it to update it, which will automatically kick off the CI process again.
If you see nothing wrong with your CR, it might be that the CI process simply failed for some reason unrelated to your
change. In that case you may want to restart the CI process by posting a reply to your CR with the simple content
“reverify”. Check the CI management page for additional information and options on this.
3. Click on a filename to review it. Select the code base to differentiate against. The default is Base and it will
generally be what is needed.
4. The review page presents the changes made to the file. At the top of the review, the presentation shows some
general navigation options. Navigate through the patch set using the arrows on the top right corner. It is possible
to go to the previous or next file in the set or to return to the main change screen. Click on the yellow sticky pad
to add comments to the whole file.
The focus of the page is on the comparison window. The changes made are presented in green on the right versus the
base version on the left. Double click to highlight the text within the actual change to provide feedback on a specific
section of the code. Press c once the code is highlighted to add comments to that section.
5. After adding the comment, it is saved as a Draft.
6. Once you have reviewed all files and provided feedback, click the green up arrow at the top right to return to the
main change page. Click the Reply button, write some final comments, and submit your score for the patch
set. Click Post to submit the review of each reviewed file, as well as your final comment and score. Gerrit
sends an email to the change-submitter and all listed reviewers. Finally, it logs the review for future reference.
All individual comments are saved as Draft until the Post button is clicked.
This document presents some best practices to help you use Gerrit more effectively. The intent is to show how content
can be submitted easily. Use the recommended practices to reduce your troubleshooting time and improve participation
in the community.
Visit Gerrit then select Projects --> List --> SELECT-PROJECT --> Branches. Select the branch
that interests you, click on gitweb located on the right-hand side. Now, gitweb loads your selection on the Git web
interface and redirects appropriately.
Watching a Project
Visit Gerrit, then select Settings, located on the top right corner. Select Watched Projects and then add any
projects that interest you.
Commit Messages
Gerrit follows the Git commit message format. Ensure the headers are at the bottom and don’t contain blank lines
between one another. The following example shows the format and content expected in a commit message:
Brief (no more than 55 chars) one line description.

Elaborate summary of the changes made referencing why (motivation),
what was changed and how it was tested. Note also any changes to
documentation made to remain consistent with the code changes,
wrapping text at 72 chars/line.

Jira: FAB-100
Change-Id: LONGHEXHASH
Signed-off-by: Your Name <your.email@example.org>
AnotherExampleHeader: An Example of another Value
The Gerrit server provides a commit-msg hook to autogenerate the Change-Id, which is one-time use.
Recommended reading: How to Write a Git Commit Message
Topic branches
Topic branches are temporary branches that you push to commit a set of logically-grouped dependent commits. To
push changes from the REMOTE/master tree to Gerrit, to be reviewed as a topic named TopicName, use the following
command as an example:

$ git push REMOTE HEAD:refs/for/master/TopicName

The topic will show up in the review UI and in the Open Changes List. Topic branches will disappear from the
master tree when their content is merged.
You may decide whether or not you'd like the cover letter to appear in the history.
1. To make a cover letter that appears in the history, use this command:

git commit --allow-empty

Edit the commit message; this message then becomes the cover letter. The command used doesn't change any files in
the source tree.
2. To make a cover letter that doesn't appear in the history, follow these steps:
• Put the empty commit at the end of your commits list so it can be ignored without having to rebase.
• Now add your commits.
• Finally, push the commits to a topic branch. The following command is an example:

git push REMOTE HEAD:refs/for/master/TopicName

If you already have commits but you want to set a cover letter, create an empty commit for the cover letter and move
the commit so it becomes the last commit on the list. Use the following command as an example:

git rebase -i HEAD~#Commits

Be careful to uncomment the commit before moving it. #Commits is the sum of the commits plus your new cover
letter.
In the review UI, on the top right corner, the Download link provides a list of commands and hyperlinks to checkout
or download diffs or files.
We recommend the use of the git review plugin. The steps to install git review are beyond the scope of this document.
Refer to the git review documentation for the installation process.
To check out a specific change using Git, the following command usually works:

git review -d CHANGEID

If you don't have git-review installed, the following commands will do the same thing, where NN is the last two digits
of the change number:

git fetch REMOTE refs/changes/NN/CHANGEID/VERSION && git checkout FETCH_HEAD

For example, for the 4th version of change 2464, NN is the last two digits (64):

git fetch REMOTE refs/changes/64/2464/4 && git checkout FETCH_HEAD
You can use draft branches to add specific reviewers before publishing your change. The draft branches are
pushed to refs/drafts/master/TopicName.
The next command ensures a local branch is created:

git checkout -b BRANCHNAME

The next command pushes your change to the drafts branch under TopicName:

git push REMOTE HEAD:refs/drafts/master/TopicName
You can create your own branches to develop features. The branches are pushed to the
refs/sandbox/USERNAME/BRANCHNAME location.
These commands ensure the branch is created in Gerrit's server:

git checkout -b sandbox/USERNAME/BRANCHNAME
git push --set-upstream REMOTE HEAD:refs/sandbox/USERNAME/BRANCHNAME
During the review process, you might be asked to update your change. It is possible to submit multiple versions of the
same change. Each version of the change is called a patch set.
Always maintain the Change-Id that was assigned. For example, there is a list of commits, c0. . . c7, which were
submitted as a topic branch:
c0
...
c7
After you get reviewers' feedback, there are changes in c3 and c4 that must be fixed. If the fix requires rebasing,
rebasing changes the commit IDs (see the rebasing section for more information); however, you must keep the same
Change-Id and push the changes again:

git push REMOTE HEAD:refs/for/master/TopicName

This new push creates a new patch set; your local history is then cleared. However, you can still access the history
of your changes in Gerrit on the review UI section, for each change.
It is also permitted to add more commits when pushing new versions.
Rebasing
Rebasing is usually the last step before pushing changes to Gerrit; the Change-Ids must be kept the same through the
rebase. An interactive rebase (git rebase -i) gives you, among others, the following options:
• squash: mixes two or more commits into a single one.
• reword: changes the commit message.
• edit: changes the commit content.
• reorder: allows you to interchange the order of the commits.
• rebase: stacks the commits on top of the master.
Before pushing a rebase to your master, ensure that the history has a consecutive order.
For example, your REMOTE/master has the list of commits from a0 to a4; Then, your changes c0. . . c7 are on top
of a4; thus:
a0
a1
a2
a3
a4
c0
c1
...
c7
If REMOTE/master receives commits a5, a6, and a7, pull with a rebase as follows:

git pull --rebase REMOTE master
Use these commands to change the configuration of Git in order to produce better logs:
The command above sets the log to abbreviate the commits’ hash.
The command above sets the abbreviation length to the last 5 characters of the hash.
The command above avoids the insertion of an unnecessary line before the Author line.
These settings apply to the current repository. To make the configuration changes apply to the current Git user
everywhere, add the --global option to each command, as follows:
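# the same settings, stored in the user-level Git configuration
git config --global log.abbrevCommit true
git config --global core.abbrev 5
git config --global format.pretty oneline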
Coding Golang
We code in Go™ and strictly follow Go best practices; deviations will not be accepted. You must run the following
tools against your Go code and fix all errors and warnings:
• golint
• go vet
• goimports
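For example, run from the root of the fabric working tree (a sketch; flag spellings may vary across tool versions):
golint ./...        # report style problems
go vet ./...        # report suspicious constructs
goimports -l -w .   # list and fix files with incorrect import formatting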
API Documentation
The API documentation for Hyperledger Fabric’s Golang APIs is available in GoDoc.
If you modify any .proto files, run the following command to generate/update the respective .pb.go files.
cd $GOPATH/src/github.com/hyperledger/fabric
make protos
Hyperledger Fabric uses Go Vendoring for package management. This means that all required packages reside in the
$GOPATH/src/github.com/hyperledger/fabric/vendor folder. Go will use packages in this folder
instead of the GOPATH when the go install or go build commands are executed. To manage the packages in
the vendor folder, we use dep.
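For example, a hedged sketch of pinning a new dependency with dep (the package path github.com/pkg/errors is only an illustration):
cd $GOPATH/src/github.com/hyperledger/fabric
dep ensure -add github.com/pkg/errors   # fetch into vendor/ and record the constraint in Gopkg.toml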
Glossary
Terminology is important, so that all Hyperledger Fabric users and developers agree on what we mean by each specific
term: what is a smart contract, for example? The documentation will reference the glossary as needed, but feel free to
read the entire thing in one sitting if you like; it’s pretty enlightening!
13.1 Anchor Peer
Used by gossip to make sure peers in different organizations know about each other.
When a configuration block that contains an update to the anchor peers is committed, peers reach out to the anchor
peers and learn from them about all of the peers known to the anchor peer(s). Once at least one peer from each
organization has contacted an anchor peer, the anchor peer learns about every peer in the channel. Since gossip
communication is constant, and because peers always ask to be told about the existence of any peer they don’t know
about, a common view of membership can be established for a channel.
For example, let’s assume we have three organizations—A, B, C— in the channel and a single anchor
peer—peer0.orgC— defined for organization C. When peer1.orgA (from organization A) contacts peer0.orgC, it will
tell it about peer0.orgA. And when at a later time peer1.orgB contacts peer0.orgC, the latter would tell the former
about peer0.orgA. From that point forward, organizations A and B would start exchanging membership information
directly without any assistance from peer0.orgC.
As communication across organizations depends on gossip in order to work, there must be at least one anchor peer
defined in the channel configuration. It is strongly recommended that every organization provides its own set of anchor
peers for high availability and redundancy.
13.2 ACL
An ACL, or Access Control List, associates access to specific peer resources (such as system chaincode APIs or event
services) to a Policy (which specifies how many and what types of organizations or roles are required). The ACL is
part of a channel’s configuration. It is therefore persisted in the channel’s configuration blocks, and can be updated
using the standard configuration update mechanism.
An ACL is formatted as a list of key-value pairs, where the key identifies the resource whose access we wish to
control, and the value identifies the channel policy (group) that is allowed to access it. For example,
lscc/GetDeploymentSpec: /Channel/Application/Readers defines that the lifecycle chaincode
GetDeploymentSpec API (the resource) is accessible by identities which satisfy the
/Channel/Application/Readers policy.
A set of default ACLs is provided in the configtx.yaml file, which is used by configtxgen to build channel
configurations. The defaults can be set in the top-level “Application” section of configtx.yaml or overridden on a
per-profile basis in the “Profiles” section.
13.3 Block
13.4 Chain
13.5 Chaincode
See Smart-Contract.
13.6 Channel
Fig. 3: Channel C connects application A1, peer P2 and ordering service O1.
13.7 Commit
Each Peer on a channel validates ordered blocks of transactions and then commits (writes/appends) the blocks to its
replica of the channel Ledger. Peers also mark each transaction in each block as valid or invalid.
13.8 Concurrency Control Version Check
Concurrency Control Version Check is a method of keeping state in sync across peers on a channel. Peers execute
transactions in parallel, and before commitment to the ledger, peers check that the data read at execution time has
not changed. If the data read for the transaction has changed between execution time and commitment time, then a
Concurrency Control Version Check violation has occurred, and the transaction is marked as invalid on the ledger and
values are not updated in the state database.
13.9 Configuration Block
Contains the configuration data defining members and policies for a system chain (ordering service) or channel. Any
configuration modifications to a channel or overall network (e.g. a member leaving or joining) will result in a new
configuration block being appended to the appropriate chain. This block will contain the contents of the genesis block,
plus the delta.
13.10 Consensus
A broader term overarching the entire transactional flow, which serves to generate an agreement on the order and to
confirm the correctness of the set of transactions constituting a block.
13.11 Consortium
A consortium is a collection of non-orderer organizations on the blockchain network. These are the organizations that
form and join channels and that own peers. While a blockchain network can have multiple consortia, most blockchain
networks have a single consortium. At channel creation time, all organizations added to the channel must be part of a
consortium. However, an organization that is not defined in a consortium may be added to an existing channel.
13.12 Current State
See World-State.
13.13 Dynamic Membership
Hyperledger Fabric supports the addition/removal of members, peers, and ordering service nodes, without
compromising the operationality of the overall network. Dynamic membership is critical when business relationships
adjust and entities need to be added/removed for various reasons.
13.14 Endorsement
Refers to the process where specific peer nodes execute a chaincode transaction and return a proposal response to the
client application. The proposal response includes the chaincode execution response message, results (read set and
write set), and events, as well as a signature to serve as proof of the peer’s chaincode execution. Chaincode applications
have corresponding endorsement policies, in which the endorsing peers are specified.
13.15 Endorsement policy
Defines the peer nodes on a channel that must execute transactions attached to a specific chaincode application, and
the required combination of responses (endorsements). A policy could require that a transaction be endorsed by a
minimum number of endorsing peers, a minimum percentage of endorsing peers, or by all endorsing peers that are
assigned to a specific chaincode application. Policies can be curated based on the application and the desired level of
resilience against misbehavior (deliberate or not) by the endorsing peers. A transaction that is submitted must satisfy
the endorsement policy before being marked as valid by committing peers. A distinct endorsement policy for install
and instantiate transactions is also required.
13.16 Hyperledger Fabric CA
Hyperledger Fabric CA is the default Certificate Authority component, which issues PKI-based certificates to net-
work member organizations and their users. The CA issues one root certificate (rootCert) to each member and one
enrollment certificate (ECert) to each authorized user.
13.17 Genesis Block
The configuration block that initializes the ordering service, or serves as the first block on a chain.
13.18 Gossip Protocol
The gossip data dissemination protocol performs three functions: 1) manages peer discovery and channel membership;
2) disseminates ledger data across all peers on the channel; 3) syncs ledger state across all peers on the channel. Refer
to the Gossip topic for more details.
13.19 Initialize
13.20 Install
13.21 Instantiate
The process of starting and initializing a chaincode application on a specific channel. After instantiation, peers that
have the chaincode installed can accept chaincode invocations.
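For illustration, a hedged sketch using the peer CLI; the orderer address, channel mychannel, chaincode mycc, and the constructor arguments are all placeholders:
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 \
    -c '{"Args":["init","a","100","b","200"]}' \
    -P "OR ('Org1MSP.member','Org2MSP.member')"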
13.22 Invoke
Used to call chaincode functions. A client application invokes chaincode by sending a transaction proposal to a peer.
The peer will execute the chaincode and return an endorsed proposal response to the client application. The client
application will gather enough proposal responses to satisfy an endorsement policy, and will then submit the
transaction results for ordering, validation, and commit. The client application may choose not to submit the
transaction results. For example, if the invoke only queried the ledger, the client application typically would not
submit the read-only transaction, unless there is a desire to log the read on the ledger for audit purposes. The invoke
includes a channel identifier, the chaincode function to invoke, and an array of arguments.
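A hedged sketch with the same placeholder names as the instantiate example above:
peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc \
    -c '{"Args":["invoke","a","b","10"]}'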
13.23 Leading Peer
Each Organization can own multiple peers on each channel that they subscribe to. One or more of these peers should
serve as the leading peer for the channel, in order to communicate with the network ordering service on behalf of the
organization. The ordering service delivers blocks to the leading peer(s) on a channel, who then distribute them to
other peers within the same organization.
13.24 Ledger
A ledger consists of two distinct, though related, parts – a “blockchain” and the “state database”, also known as
“world state”. Unlike other ledgers, blockchains are immutable – that is, once a block has been added to the chain, it
cannot be changed. In contrast, the “world state” is a database containing the current value of the set of key-value
pairs that have been added, modified or deleted by the set of validated and committed transactions in the blockchain.
It’s helpful to think of there being one logical ledger for each channel in the network. In reality, each peer in a channel
maintains its own copy of the ledger – which is kept consistent with every other peer’s copy through a process called
consensus. The term Distributed Ledger Technology (DLT) is often associated with this kind of ledger – one that is
logically singular, but has many identical copies distributed across a set of network nodes (peers and the ordering
service).
Fig. 4: A Ledger, ‘L’
13.25 Member
See Organization.
13.26 Membership Service Provider
The Membership Service Provider (MSP) refers to an abstract component of the system that provides credentials to
clients and peers for them to participate in a Hyperledger Fabric network. Clients use these credentials to authenticate
their transactions, and peers use these credentials to authenticate transaction processing results (endorsements). While
strongly connected to the transaction processing components of the system, this interface aims to have membership
services components defined in such a way that alternate implementations can be smoothly plugged in without
modifying the core of the transaction processing components of the system.
13.27 Membership Services
13.28 Ordering Service
A defined collective of nodes that orders transactions into a block. The ordering service exists independent of the
peer processes and orders transactions on a first-come-first-serve basis for all channels on the network. The ordering
service is designed to support pluggable implementations beyond the out-of-the-box SOLO and Kafka varieties. The
ordering service is a common binding for the overall network; it contains the cryptographic identity material tied to
each Member.
13.29 Organization
Also known as “members”, organizations are invited to join the blockchain network by a blockchain service provider.
An organization is joined to a network by adding its Membership Service Provider (MSP) to the network. The MSP
defines how other members of the network may verify that signatures (such as those over transactions) were generated
by a valid identity, issued by that organization. The particular access rights of identities within an MSP are governed
by policies which are also agreed upon when the organization is joined to the network. An organization can be as
large as a multi-national corporation or as small as an individual. The transaction endpoint of an organization is a
Peer. A collection of organizations forms a Consortium. While all of the organizations on a network are members,
not every organization will be part of a consortium.
Fig. 6: An organization, ‘ORG’
13.30 Peer
A network entity that maintains a ledger and runs chaincode containers in order to perform read/write operations to
the ledger. Peers are owned and maintained by members.
Fig. 7: A peer, ‘P’
13.31 Policy
Policies are expressions composed of properties of digital identities, for example: Org1.Peer OR Org2.Peer.
They are used to restrict access to resources on a blockchain network. For instance, they dictate who can read from
or write to a channel, or who can use a specific chaincode API via an ACL. Policies may be defined in
configtx.yaml prior to bootstrapping an ordering service or creating a channel, or they can be specified when
instantiating chaincode on a channel. A default set of policies ships in the sample configtx.yaml and will be
appropriate for most networks.
13.32 Private Data
Confidential data that is stored in a private database on each authorized peer, logically separate from the channel
ledger data. Access to this data is restricted to one or more organizations on a channel via a private data collection
definition. Unauthorized organizations will have a hash of the private data on the channel ledger as evidence of the
transaction data. Also, for further privacy, hashes of the private data, rather than the private data itself, go through the
Ordering-Service; this keeps the private data confidential from the Orderer.
13.33 Private Data Collection
Used to manage confidential data that two or more organizations on a channel want to keep private from other orga-
nizations on that channel. The collection definition describes a subset of organizations on a channel entitled to store a
set of private data, which by extension implies that only these organizations can transact with the private data.
13.34 Proposal
A request for endorsement that is aimed at specific peers on a channel. Each proposal is either an instantiate or an
invoke (read/write) request.
13.35 Query
A query is a chaincode invocation which reads the current state of the ledger but does not write to it. The chaincode
function may query certain keys on the ledger, or may query for a set of keys on the ledger. Since queries do not change
ledger state, the client application will typically not submit these read-only transactions for ordering, validation, and
commit. Although not typical, the client application can choose to submit the read-only transaction for ordering,
validation, and commit, for example if the client wants auditable proof on the ledger chain that it had knowledge of
specific ledger state at a certain point in time.
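For example, a hedged sketch of a read-only invocation through the peer CLI (channel, chaincode and key names are placeholders):
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'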
13.36 Software Development Kit (SDK)
The Hyperledger Fabric client SDK provides a structured environment of libraries for developers to write and test
chaincode applications. The SDK is fully configurable and extensible through a standard interface. Components,
including cryptographic algorithms for signatures, logging frameworks and state stores, are easily swapped in and
out of the SDK. The SDK provides APIs for transaction processing, membership services, node traversal and event
handling.
Currently, the two officially supported SDKs are for Node.js and Java, while three more – Python, Go and REST – are
not yet official but can still be downloaded and tested.
13.37 Smart Contract
A smart contract is code – invoked by a client application external to the blockchain network – that manages access
and modifications to a set of key-value pairs in the World State. In Hyperledger Fabric, smart contracts are referred to
as chaincode. Smart contract chaincode is installed onto peer nodes and instantiated to one or more channels.
13.38 State Database
Current state data is stored in a state database for efficient reads and queries from chaincode. Supported databases
include levelDB and couchDB.
13.39 System Chain
Contains a configuration block defining the network at a system level. The system chain lives within the ordering
service, and similar to a channel, has an initial configuration containing information such as: MSP information,
policies, and configuration details. Any change to the overall network (e.g. a new org joining or a new ordering
node being added) will result in a new configuration block being added to the system chain.
The system chain can be thought of as the common binding for a channel or group of channels. For instance, a
collection of financial institutions may form a consortium (represented through the system chain), and then proceed to
create channels relative to their aligned and varying business agendas.
13.40 Transaction
Invoke or instantiate results that are submitted for ordering, validation, and commit. Invokes
are requests to read/write data from the ledger. Instantiate is a request to start and initialize
a chaincode on a channel. Application clients gather invoke or instantiate responses from
endorsing peers and package the results and endorsements into a transaction that is submitted
for ordering, validation, and commit.
13.41 World State
Also known as the “current state”, the world state is a component of the HyperLedger Fabric Ledger. The world state
represents the latest values for all keys included in the chain transaction log. Chaincode executes transaction proposals
against world state data because the world state provides direct access to the latest value of these keys rather than
having to
calculate them by traversing the entire transaction log. The world state will change every
time the value of a key changes (for example, when the ownership of a car – the “key” – is
transferred from one owner to another – the “value”) or when a new key is added (a car is created). As a result, the
world state is critical to a transaction flow, since the current state of a key-value pair must be known before it can be
changed. Peers commit the latest values to the ledger world state for each valid transaction included in a processed
block.
Releases
Still Have Questions?
We try to maintain a comprehensive set of documentation for various audiences. However, we realize that often there
are questions that remain unanswered. For any technical questions relating to Hyperledger Fabric not answered here,
please use StackOverflow. Another approach to getting your questions answered is to send an email to the mailing
list ([email protected]), or ask your questions on RocketChat (an alternative to Slack) on the
#fabric or #fabric-questions channel.
Note: When asking about problems you are facing, please tell us about the environment in which you are experiencing
them, including the OS, the version of Docker you are using, etc.
Status
Hyperledger Fabric is in the Active state. For more information on the history of this project, see our wiki page.
Information on what Active entails can be found in the Hyperledger Project Lifecycle document.
Note: If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please
visit the Still Have Questions? page for some tips on where to find additional help.