Docs.git

GitHub - mojaloop/docs: Cross-repo documentation, end to end scenarios, and architecture
Repo URL: https://github.com/mojaloop/docs.git

1 Central Directory

1.1 Component Architecture

Central Directory Block Diagram

1.2 End User Lookup

End User lookup sequence diagram

1.3 Directory Endpoints


In this guide, we'll walk through the different central directory endpoints, the data structures they often deal with, and the errors they can return.


1.4 Endpoints

1.4.1 Register a DFSP

This endpoint allows a DFSP to be registered to use the central directory.

1.4.1.1 HTTP Request

POST http://central-directory/commands/register

1.4.1.2 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are admin:admin |

1.4.1.3 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to application/json |

1.4.1.4 Request body

| Field | Type | Description |
| --- | --- | --- |
| name | String | The name of the created DFSP |
| shortName | String | The short name of the created DFSP |
| providerUrl | String | The URL reference for the DFSP |

1.4.1.5 Response 201 Created

| Field | Type | Description |
| --- | --- | --- |
| Object | DFSP | The DFSP object as saved |

1.4.1.6 Request

POST http://central-directory/commands/register HTTP/1.1
Content-Type: application/json
{
  "name": "The first DFSP",
  "shortName": "dfsp1",
  "providerUrl": "http://url.com"
}

1.4.1.7 Response

HTTP/1.1 201 CREATED
Content-Type: application/json
{
  "name": "The first DFSP",
  "shortName": "dfsp1",
  "providerUrl": "http://url.com",
  "key": "dfsp_key",
  "secret": "dfsp_secret"
}

1.4.1.8 Errors (4xx)

| Field | Description |
| --- | --- |
| AlreadyExistsError | The DFSP already exists (determined by name) |

{
  "id": "AlreadyExistsError",
  "message": "The DFSP already exists (determined by name)"
}
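
A minimal client sketch of this call, assuming a fetch-capable Node.js runtime (Node 18+); the request values come from the example above and the admin credentials from the authentication table.

(async () => {
  // Register a DFSP using HTTP Basic auth (admin:admin per the table above).
  const res = await fetch('http://central-directory/commands/register', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + Buffer.from('admin:admin').toString('base64')
    },
    body: JSON.stringify({
      name: 'The first DFSP',
      shortName: 'dfsp1',
      providerUrl: 'http://url.com'
    })
  });
  if (res.status === 201) {
    const dfsp = await res.json();
    // Store the returned key and secret; they authenticate later calls.
    console.log(dfsp.key, dfsp.secret);
  }
})();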

1.4.2 Get identifier types

This endpoint allows retrieval of the identifier types supported by the central directory.

1.4.2.1 HTTP Request

GET http://central-directory/identifier-types

1.4.2.2 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are the key and secret of a registered DFSP, for example dfsp1:dfsp1 |

1.4.2.3 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Array | List of supported Identifier Type objects |

1.4.2.4 Request

GET http://central-directory/identifier-types HTTP/1.1

1.4.2.5 Response

HTTP/1.1 200 OK
[
  {
    "identifierType": "eur",
    "description": "Central end user registry"
  }
]

1.4.3 Register an identifier

This endpoint allows a DFSP to add an identifier associated with their account. When the identifier is retrieved from the Lookup resource by identifier endpoint, the url registered with the DFSP will be returned.

1.4.3.1 HTTP Request

POST http://central-directory/resources

1.4.3.2 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The key and secret for the DFSP |

1.4.3.3 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to application/json |

1.4.3.4 Request body

| Field | Type | Description |
| --- | --- | --- |
| identifier | String | The identifier type and identifier to be created, separated by a colon |
| preferred | String | Optional. Sets the identifier as preferred; can be either true or false |

Preferred will default to true if it is the first DFSP added for this identifier, and will default to false if another DFSP already has been added.

If the current DFSP being updated is preferred and the preferred value is set to false, an error will be thrown.

1.4.3.5 Response 201 Created

| Field | Type | Description |
| --- | --- | --- |
| Object | Resource | The newly-created Resource object as saved |

1.4.3.6 Request

POST http://central-directory/resources HTTP/1.1
Content-Type: application/json
{
  "identifier": "eur:dfsp123",
  "preferred": "true"
}

1.4.3.7 Response

HTTP/1.1 201 CREATED
Content-Type: application/json
{
  "name": "The First DFSP",
  "providerUrl": "http://dfsp/users/1",
  "shortName": "dsfp1",
  "preferred": "true",
  "registered": "true"
}

1.4.3.8 Errors (4xx)

| Field | Description |
| --- | --- |
| AlreadyExistsError | The identifier has already been registered by this DFSP |

{
  "id": "AlreadyExistsError",
  "message": "The identifier has already been registered by this DFSP"
}
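
A hedged sketch of the same call, using the key and secret returned at registration (the dfsp_key:dfsp_secret pair from the earlier example is illustrative) and handling the AlreadyExistsError case from the table above.

(async () => {
  const auth = 'Basic ' + Buffer.from('dfsp_key:dfsp_secret').toString('base64');
  const res = await fetch('http://central-directory/resources', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: auth },
    body: JSON.stringify({ identifier: 'eur:dfsp123', preferred: 'true' })
  });
  const body = await res.json();
  if (res.status === 201) {
    console.log('registered:', body.providerUrl);
  } else if (body.id === 'AlreadyExistsError') {
    console.log('identifier already registered by this DFSP');
  }
})();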

1.4.4 Lookup resource by identifier

This endpoint allows retrieval of a URI that will return customer information by supplying an identifier and identifier type.

1.4.4.1 HTTP Request

GET http://central-directory/resources?identifier={identifierType:identifier}

1.4.4.2 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are the key and secret of a registered DFSP, for example dfsp1:dfsp1 |

1.4.4.3 Query Params

| Field | Type | Description |
| --- | --- | --- |
| identifier | String | Valid identifier type and identifier separated with a colon |

1.4.4.4 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Array | An array of Resource objects retrieved |

The returned array will contain one DFSP with preferred set to true. All others should be set to false.

1.4.4.5 Request

GET http://central-directory/resources?identifier=eur:1234 HTTP/1.1

1.4.4.6 Response

HTTP/1.1 200 OK
[
  {
    "name": "The First DFSP",
    "providerUrl": "http://dfsp/users/1",
    "shortName": "dsfp1",
    "preferred": "true",
    "registered": "true"
  },
  {
    "name": "The Second DFSP",
    "providerUrl": "http://dfsp/users/2",
    "shortName": "dsfp2",
    "preferred": "false",
    "registered": "false"
  }
]

1.4.4.7 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found. |

{
  "id": "NotFoundError",
  "message": "The requested resource could not be found."
}
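
Since exactly one entry in the returned array is preferred, a client can route to it directly. A minimal sketch, assuming a fetch-capable runtime; the dfsp1:dfsp1 credentials are the illustrative pair from the authentication table.

(async () => {
  const auth = 'Basic ' + Buffer.from('dfsp1:dfsp1').toString('base64');
  const res = await fetch('http://central-directory/resources?identifier=eur:1234', {
    headers: { Authorization: auth }
  });
  const resources = await res.json();
  // Note: preferred is serialized as the string "true", not a boolean.
  const preferred = resources.find((r) => r.preferred === 'true');
  console.log('route to:', preferred.providerUrl);
})();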

1.4.5 Get directory metadata

Returns metadata associated with the directory

1.4.5.1 HTTP Request

GET http://central-directory

1.4.5.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Metadata | Object | The Metadata object for the directory |

1.4.5.3 Request

GET http://central-directory HTTP/1.1

1.4.5.4 Response

HTTP/1.1 200 OK
{
  "directory": "http://central-directory",
  "urls": {
    "health": "http://central-directory/health",
    "identifier_types": "http://central-directory/identifier-types",
    "resources": "http://central-directory/resources",
    "register_identifier": "http://central-directory/resources"
  }
}

1.4.5.5 Directory Health

Get the current status of the service

1.4.5.5.1 HTTP Request

GET http://central-directory/health

1.4.5.5.2 Response 200 OK
| Field | Type | Description |
| --- | --- | --- |
| status | String | The status of the service, OK if the service is working |

1.4.5.5.3 Request

GET http://central-directory/health HTTP/1.1

1.4.5.5.4 Response

HTTP/1.1 200 OK
{
  "status": "OK"
}

1.5 Data Structures

1.5.1 Resource Object

A resource represents the information returned about an identifier and identifier type.

A resource object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| name | String | Name of the DFSP |
| providerUrl | URI | A URI that can be called to get more information about the customer |
| shortName | String | Shortened name for the DFSP |
| preferred | String | Indicates whether the DFSP is set as preferred; can be either true or false |
| registered | String | true if the DFSP is registered for the identifier, false if defaulted |

1.5.2 DFSP Object

Represents a DFSP that has registered with the central directory.

Some fields are Read-only, meaning they are set by the API and cannot be modified by clients. A DFSP object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| name | String | The name of the created DFSP |
| shortName | String | The short name of the created DFSP |
| providerUrl | String | The URL for the DFSP |
| key | String | Key used to authenticate with protected endpoints. Becomes the username for Basic Auth. Currently the same value as the name field |
| secret | String | Secret used to authenticate with protected endpoints. Currently the same value as the name field |

1.5.3 Identifier Type Object

Represents an identifier type that is supported by the central directory.

An identifier type object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| identifierType | String | Unique name of the identifier type |
| description | String | Description of the identifier type |

1.5.4 Metadata Object

The central directory returns a metadata object about itself, allowing clients to configure themselves properly.

A metadata object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| directory | URI | The directory that generated the metadata |
| urls | Object | Paths to other methods exposed by this directory. Each field name is the short name for a method, and the value is the path to that method |
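
A short sketch of the self-configuration this enables: fetch the root document once and resolve endpoints from the urls map instead of hard-coding paths. A fetch-capable Node.js runtime is assumed.

(async () => {
  const metadata = await (await fetch('http://central-directory')).json();
  // Resolve endpoints by short name rather than hard-coding them.
  console.log('health:', metadata.urls.health);
  console.log('lookup:', metadata.urls.resources);
})();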

1.6 Error information

This section identifies the potential errors returned and the structure of the response.

An error object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| id | String | An identifier for the type of error |
| message | String | A message describing the error that occurred |
| validationErrors | Array | Optional. An array of validation errors |
| validationErrors[].message | String | A message describing the validation error |
| validationErrors[].params | Object | An object containing the field that caused the validation error |
| validationErrors[].params.key | String | The name of the field that caused the validation error |
| validationErrors[].params.value | String | The value that caused the validation error |
| validationErrors[].params.child | String | The name of the child field |

HTTP/1.1 404 Not Found
Content-Type: application/json
{
  "id": "InvalidQueryParameterError",
  "message": "Error validating one or more query parameters",
  "validationErrors": [
    {
      "message": "'0' is not a registered identifierType",
      "params": {
        "key": "identifierType",
        "value": "0"
      }
    }
  ]
}

2 Central Directory API

The central directory is a system that allows DFSPs to retrieve a URI that will return customer information by supplying an identifier and identifier type.

2.0.1 Resource Object<a name="resource_object"></a>

A resource represents the information returned about an identifier and identifier type.

A resource object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| spspReceiver | URI | A URI that can be called to get more information about the customer |

2.0.2 DFSP Object<a name="dfsp_object"></a>

Represents a DFSP that has registered with the central directory.

Some fields are Read-only, meaning they are set by the API and cannot be modified by clients. A DFSP object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| name | String | Unique name of the DFSP |
| key | String | Optional, Read-only. Key to use when authenticating; currently the same value as the name field |
| secret | String | Optional, Read-only. Secret to use when authenticating; currently the same value as the name field |

2.0.3 Identifier Type Object<a name="identifier_type_object"></a>

Represents an identifier type that is supported by the central directory.

An identifier type object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| identifierType | String | Unique name of the identifier type |
| description | String | Description of the identifier type |

2.0.4 Metadata Object<a name="metadata_object"></a>

The central directory returns a metadata object about itself, allowing clients to configure themselves properly.

A metadata object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| directory | URI | The directory that generated the metadata |
| urls | Object | Paths to other methods exposed by this directory. Each field name is the short name for a method, and the value is the path to that method |

2.0.5 Lookup resource by identifier<a name="lookup_resource"></a>

This endpoint allows retrieval of a URI that will return customer information by supplying an identifier and identifier type.

http://central-directory/resources?identifierType=:type&identifier=:identifier
GET http://central-directory/resources/?identifierType=test&identifier=1 HTTP/1.1

2.0.5.1 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are the key and secret of a registered DFSP; for example, dfsp1:dfsp1 |

2.0.5.2 Query Params

| Field | Type | Description |
| --- | --- | --- |
| identifierType | String | Valid identifier type |
| identifier | String | Identifier for the user |

2.0.5.3 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Resource | The Resource object retrieved |

HTTP/1.1 200 OK
{
  "spspReceiver": "http://dfsp/users/2"
}

2.0.5.4 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found |

2.0.6 Register a DFSP<a name="register_dfsp"></a>

This endpoint allows a DFSP to be registered to use the central directory.

http://central-directory/commands/register
POST http://central-directory/commands/register HTTP/1.1
Content-Type: application/json
{
  "name": "dfsp1"
}

2.0.6.1 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are admin:admin |

2.0.6.2 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to application/json |

2.0.6.3 Request body

| Field | Type | Description |
| --- | --- | --- |
| Object | DFSP | A DFSP object to be created |

2.0.6.4 Response 201 Created

| Field | Type | Description |
| --- | --- | --- |
| Object | DFSP | The newly-created DFSP object as saved |

HTTP/1.1 201 CREATED
Content-Type: application/json
{
  "name": "dfsp1",
  "key": "dfsp1",
  "secret": "dfsp1"
}
2.0.6.4.1 Errors (4xx)

| Field | Description |
| --- | --- |
| AlreadyExistsError | The DFSP already exists (determined by name) |

2.0.7 Get identifier types<a name="get_identifier_types"></a>

This endpoint allows retrieval of the identifier types supported by the central directory.

http://central-directory/identifier-types
GET http://central-directory/identifier-types HTTP/1.1

2.0.7.1 Authentication

| Type | Description |
| --- | --- |
| HTTP Basic | The username and password are the key and secret of a registered DFSP, for example, dfsp1:dfsp1 |

2.0.7.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Array | List of supported Identifier Type objects |

HTTP/1.1 200 OK
[
  {
    "identifierType": "test",
    "description": "test"
  }
]

2.0.8 Get directory metadata<a name="get_directory_metadata"></a>

Returns metadata associated with the directory

http://central-directory
GET http://central-directory HTTP/1.1
2.0.8.0.1 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Metadata | Object | The Metadata object for the directory |

HTTP/1.1 200 OK
{
  "directory": "http://central-directory-dev.us-west-2.elasticbeanstalk.com",
  "urls": {
    "health": "http://central-directory-dev.us-west-2.elasticbeanstalk.com/health",
    "identifier_types": "http://central-directory-dev.us-west-2.elasticbeanstalk.com/identifier-types",
    "resources": "http://central-directory-dev.us-west-2.elasticbeanstalk.com/resources"
  }
}

2.0.9 Error information<a name="error_information"></a>

This section identifies the potential errors returned and the structure of the response.

An error object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| id | String | An identifier for the type of error |
| message | String | A message describing the error that occurred |
| validationErrors | Array | Optional. An array of validation errors |
| validationErrors[].message | String | A message describing the validation error |
| validationErrors[].params | Object | An object containing the field that caused the validation error |
| validationErrors[].params.key | String | The name of the field that caused the validation error |
| validationErrors[].params.value | String | The value that caused the validation error |
| validationErrors[].params.child | String | The name of the child field |

HTTP/1.1 404 Not Found
Content-Type: application/json
{
  "id": "InvalidQueryParameterError",
  "message": "Error validating one or more query parameters",
  "validationErrors": [
    {
      "message": "'0' is not a registered identifierType",
      "params": {
        "key": "identifierType",
        "value": "0"
      }
    }
  ]
}

3 Central Ledger

3.1 Component Architecture

Central Ledger Block Diagram

3.2 Transfer/Fulfillment Flow

Transfer/Fulfillment sequence diagram

3.3 Settlement Flow

Settlement sequence diagram

3.4 Endpoints

Endpoints documentation

4 Central Ledger API

The central ledger is a system to record transfers between DFSPs, and to calculate net positions for DFSPs and issue settlement instructions.

4.1 Data Structures<a name="data_structures"></a>

4.1.1 Transfer Object<a name="transfer_object"></a>

A transfer represents money being moved between two DFSP accounts at the central ledger.

The transfer must specify an execution_condition; the transfer executes automatically when presented with the fulfillment for the condition (assuming the transfer has not expired or been canceled first). Currently, the central ledger only supports the condition type PREIMAGE-SHA-256 and a maximum fulfillment length of 65535.
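
As an illustration of the condition format used in the examples below, here is a minimal Node.js sketch (built-in crypto module only) that derives a PREIMAGE-SHA-256 condition and its fulfillment in the legacy "cc:0:3:..." / "cf:0:..." text notation. The field layout (type 0, feature bitmask 3, base64url hash, max fulfillment length) is our reading of that notation as an assumption, not something this document specifies.

const crypto = require('crypto');

// base64url without padding, as used by the cc:/cf: text format.
function toBase64Url(buf) {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// Condition: type 0 (PREIMAGE-SHA-256), feature bitmask 3,
// base64url(sha256(preimage)), max fulfillment length.
function makeCondition(preimage) {
  const hash = crypto.createHash('sha256').update(preimage).digest();
  return 'cc:0:3:' + toBase64Url(hash) + ':' + preimage.length;
}

// Fulfillment: type 0 followed by the base64url-encoded preimage.
function makeFulfillment(preimage) {
  return 'cf:0:' + toBase64Url(preimage);
}

const preimage = Buffer.from([0xfe, 0xff]);
console.log(makeCondition(preimage));   // cc:0:3:<hash>:2
console.log(makeFulfillment(preimage)); // cf:0:_v8, as in the execute example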

Some fields are Read-only, meaning they are set by the API and cannot be modified by clients. A transfer object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| id | URI | Resource identifier |
| ledger | URI | The ledger where the transfer will take place |
| debits | Array | Funds that go into the transfer |
| debits[].account | URI | Account holding the funds |
| debits[].amount | String | Amount as decimal |
| debits[].invoice | URI | Optional. Unique invoice URI |
| debits[].memo | Object | Optional. Additional information related to the debit |
| debits[].authorized | Boolean | Optional. Indicates whether the debit has been authorized by the required account holder |
| debits[].rejected | Boolean | Optional. Indicates whether the debit has been rejected by the account holder |
| debits[].rejection_message | String | Optional. Reason the debit was rejected |
| credits | Array | Funds that come out of the transfer |
| credits[].account | URI | Account receiving the funds |
| credits[].amount | String | Amount as decimal |
| credits[].invoice | URI | Optional. Unique invoice URI |
| credits[].memo | Object | Optional. Additional information related to the credit |
| credits[].authorized | Boolean | Optional. Indicates whether the credit has been authorized by the required account holder |
| credits[].rejected | Boolean | Optional. Indicates whether the credit has been rejected by the account holder |
| credits[].rejection_message | String | Optional. Reason the credit was rejected |
| execution_condition | String | The condition for executing the transfer |
| expires_at | DateTime | Time when the transfer expires. If the transfer has not executed by this time, the transfer is canceled |
| state | String | Optional, Read-only. The current state of the transfer (informational only) |
| timeline | Object | Optional, Read-only. Timeline of the transfer's state transitions |
| timeline.prepared_at | DateTime | Optional. An informational field added by the ledger to indicate when the transfer was originally prepared |
| timeline.executed_at | DateTime | Optional. An informational field added by the ledger to indicate when the transfer was originally executed |

4.1.2 Account Object<a name="account_object"></a>

An account represents a DFSP’s position at the central ledger.

Some fields are Read-only, meaning they are set by the API and cannot be modified by clients. An account object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| id | URI | Read-only. Resource identifier |
| name | String | Unique name of the account |
| balance | String | Optional, Read-only. Balance as decimal |
| is_disabled | Boolean | Optional, Read-only. Admin users may disable/enable an account |
| ledger | URI | Optional, Read-only. A link to the account's ledger |
| created | DateTime | Optional, Read-only. Time when the account was created |

4.1.3 Notification Object<a name="notification_object"></a>

The central ledger pushes a notification object to WebSocket clients when a transfer changes state. This notification is sent at most once for each state change.

A notification object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| resource | Object | Transfer object that is the subject of the notification |
| related_resources | Object | Optional. Additional resources relevant to the event |
| related_resources.execution_condition_fulfillment | String | Optional. Proof of condition completion |
| related_resources.cancellation_condition_fulfillment | String | Optional. Proof of condition completion |
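
A hedged subscription sketch: the ws npm package and the account_transfers URL pattern (taken from the ledger metadata example later in this section) are assumptions; the notification shape follows the table above.

const WebSocket = require('ws');

const socket = new WebSocket('ws://central-ledger/accounts/dfsp1/transfers');

socket.on('message', (data) => {
  const notification = JSON.parse(data);
  // resource is the Transfer object that changed state.
  console.log('transfer state:', notification.resource.state);
  if (notification.related_resources) {
    console.log('fulfillment:', notification.related_resources.execution_condition_fulfillment);
  }
});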

4.1.4 Metadata Object<a name="metadata_object"></a>

The central ledger returns a metadata object about itself, allowing clients to configure themselves properly.

A metadata object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| currency_code | String | Three-letter (ISO 4217) code of the currency this ledger tracks |
| currency_symbol | String | Currency symbol to use in user interfaces for the currency represented in this ledger. For example, "$" |
| ledger | URI | The ledger that generated the metadata |
| urls | Object | Paths to other methods exposed by this ledger. Each field name is the short name for a method, and the value is the path to that method |
| precision | Integer | How many total decimal digits of precision this ledger uses to represent currency amounts |
| scale | Integer | How many digits after the decimal place this ledger supports in currency amounts |

4.1.5 Position Object<a name="position_object"></a>

The central ledger can report the current positions for all registered accounts.

A position object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| account | URI | A link to the account for the calculated position |
| payments | String | Total non-settled amount the account has paid, as a string |
| receipts | String | Total non-settled amount the account has received, as a string |
| net | String | Net non-settled amount for the account, as a string |

4.2 Endpoints<a name="endpoints"></a>

4.2.1 Transfer Endpoints<a name="transfer_endpoints"></a>

4.2.1.1 Prepare a transfer<a name="prepare_transfer"></a>

This endpoint creates or updates a Transfer object.

http://central-ledger/transfers/:id
PUT http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204 HTTP/1.1
Content-Type: application/json
{
  "id": "http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204",
  "ledger": "http://central-ledger",
  "debits": [
    {
      "account": "http://central-ledger/accounts/dfsp1",
      "amount": "50"
    }
  ],
  "credits": [
    {
      "account": "http://central-ledger/accounts/dfsp2",
      "amount": "50"
    }
  ],
  "execution_condition": "cc:0:3:8ZdpKBDUV-KX_OnFZTsCWB_5mlCFI3DynX5f5H2dN-Y:2",
  "expires_at": "2015-06-16T00:00:01.000Z"
}
4.2.1.1.1 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to application/json |

4.2.1.1.2 URL Params

| Field | Type | Description |
| --- | --- | --- |
| id | String | A new UUID to identify this transfer |

4.2.1.1.3 Request body

| Field | Type | Description |
| --- | --- | --- |
| Object | Transfer | A Transfer object describing the transfer that should take place. For a conditional transfer, this includes an execution_condition |

4.2.1.1.4 Response 201 Created

| Field | Type | Description |
| --- | --- | --- |
| Object | Transfer | The newly-created Transfer object as saved |

4.2.1.1.5 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Transfer | The updated Transfer object as saved |

HTTP/1.1 201 CREATED
Content-Type: application/json
{
  "id": "http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204",
  "ledger": "http://usd-ledger.example/USD",
  "debits": [
    {
      "account": "http://central-ledger/accounts/dfsp1",
      "amount": "50"
    }
  ],
  "credits": [
    {
      "account": "http://central-ledger/accounts/dfsp2",
      "amount": "50"
    }
  ],
  "execution_condition": "cc:0:3:8ZdpKBDUV-KX_OnFZTsCWB_5mlCFI3DynX5f5H2dN-Y:2",
  "expires_at": "2015-06-16T00:00:01.000Z",
  "state": "proposed"
}
4.2.1.1.6 Errors (4xx)

| Field | Description |
| --- | --- |
| UnprocessableEntityError | The provided entity is syntactically correct, but there is a generic semantic problem with it |
| UnsupportedCryptoTypeError | The crypto type specified in the condition is not supported |

4.2.1.2 Execute a prepared transfer<a name="execute_transfer"></a>

Execute or cancel a transfer that has already been prepared. If the prepared transfer has an execution_condition, you can submit the fulfillment of that condition to execute the transfer. If the prepared transfer has a cancellation_condition, you can submit the fulfillment of that condition to cancel the transfer.

http://central-ledger/transfers/:id/fulfillment
PUT http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204/fulfillment HTTP/1.1
Content-Type: text/plain
cf:0:_v8
4.2.1.2.1 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to text/plain |

4.2.1.2.2 URL Params

| Field | Type | Description |
| --- | --- | --- |
| id | String | Transfer UUID |

4.2.1.2.3 Request body

| Field | Type | Description |
| --- | --- | --- |
| Fulfillment | String | A fulfillment in string format |

4.2.1.2.4 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Fulfillment | String | The fulfillment that was sent |

HTTP/1.1 200 OK
cf:0:_v8
4.2.1.2.5 Errors (4xx)

| Field | Description |
| --- | --- |
| UnprocessableEntityError | The provided entity is syntactically correct, but there is a generic semantic problem with it |
| NotFoundError | The requested resource could not be found |

4.2.1.3 Get a transfer object<a name="get_transfer_by_id"></a>

This endpoint is used to query about the details or status of a local transfer.

http://central-ledger/transfers/:id
GET http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204 HTTP/1.1
4.2.1.3.1 URL Params

| Field | Type | Description |
| --- | --- | --- |
| id | String | Transfer UUID |

4.2.1.3.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Transfer | The Transfer object as saved |

HTTP/1.1 200 OK
{
  "id": "http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204",
  "ledger": "http://usd-ledger.example/USD",
  "debits": [
    {
      "account": "http://usd-ledger.example/USD/accounts/alice",
      "amount": "50"
    }
  ],
  "credits": [
    {
      "account": "http://usd-ledger.example/USD/accounts/bob",
      "amount": "50"
    }
  ],
  "execution_condition": "cc:0:3:8ZdpKBDUV-KX_OnFZTsCWB_5mlCFI3DynX5f5H2dN-Y:2",
  "expires_at": "2015-06-16T00:00:01.000Z",
  "state": "executed",
  "timeline": {
    "proposed_at": "2015-06-16T00:00:00.000Z",
    "prepared_at": "2015-06-16T00:00:00.500Z",
    "executed_at": "2015-06-16T00:00:00.999Z"
  }
}
4.2.1.3.3 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found |

4.2.1.4 Get transfer fulfillment<a name="get_transfer_fulfillment"></a>

This endpoint is used to retrieve the fulfillment for a transfer that has been executed or cancelled.

http://central-ledger/transfers/:id/fulfillment
GET http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204/fulfillment HTTP/1.1
4.2.1.4.1 URL Params

| Field | Type | Description |
| --- | --- | --- |
| id | String | Transfer UUID |

4.2.1.4.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Fulfillment | String | The fulfillment for the transfer |

HTTP/1.1 200 OK
cf:0:_v8
4.2.1.4.3 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found |

4.2.1.5 Reject transfer<a name="reject_transfer"></a>

Reject the transfer with the given message

http://central-ledger/transfers/:id/rejection
PUT http://central-ledger/transfers/3a2a1d9e-8640-4d2d-b06c-84f2cd613204/rejection HTTP/1.1
Content-Type: text/plain
error happened
4.2.1.5.1 URL Params

| Field | Type | Description |
| --- | --- | --- |
| id | String | Transfer UUID |

4.2.1.5.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Rejection | String | An error message in string format |

HTTP/1.1 200 OK
error happened
4.2.1.5.3 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found |

4.2.2 Account Endpoints<a name="account_endpoints"></a>

4.2.2.1 Create account<a name="create_account"></a>

Create an account at the ledger

http://central-ledger/accounts
POST http://central-ledger/accounts HTTP/1.1
Content-Type: application/json
{
  "name": "dfsp1"
}
4.2.2.1.1 Headers

| Field | Type | Description |
| --- | --- | --- |
| Content-Type | String | Must be set to application/json |

4.2.2.1.2 Request body

| Field | Type | Description |
| --- | --- | --- |
| Object | Account | An Account object to create |

4.2.2.1.3 Response 201 Created

| Field | Type | Description |
| --- | --- | --- |
| Object | Account | The newly-created Account object as saved |

HTTP/1.1 201 CREATED
Content-Type: application/json
{
  "id": "http://central-ledger/accounts/dfsp1",
  "name": "dfsp1",
  "created": "2016-09-28T17:03:37.168Z",
  "balance": 1000000,
  "is_disabled": false,
  "ledger": "http://central-ledger"
}
4.2.2.1.4 Errors (4xx)

| Field | Description |
| --- | --- |
| RecordExistsError | The account already exists (determined by name) |

4.2.2.2 Get account by name<a name="get_account_by_name"></a>

Get information about an account

http://central-ledger/accounts/:name
GET http://central-ledger/accounts/dfsp1 HTTP/1.1
4.2.2.2.1 URL Params

| Field | Type | Description |
| --- | --- | --- |
| name | String | The unique name for the account |

4.2.2.2.2 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Object | Account | The Account object as saved |

HTTP/1.1 200 OK
Content-Type: application/json
{
  "id": "http://central-ledger/accounts/dfsp1",
  "name": "dfsp1",
  "created": "2016-09-28T17:03:37.168Z",
  "balance": 1000000,
  "is_disabled": false,
  "ledger": "http://central-ledger"
}
4.2.2.2.3 Errors (4xx)

| Field | Description |
| --- | --- |
| NotFoundError | The requested resource could not be found |

4.2.3 Other Endpoints<a name="other_endpoints"></a>

4.2.3.1 Get ledger metadata<a name="get_ledger_metadata"></a>

Returns metadata associated with the ledger

http://central-ledger
GET http://central-ledger HTTP/1.1
4.2.3.1.1 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Metadata | Object | The Metadata object for the ledger |

HTTP/1.1 200 OK
{
  "currency_code": null,
  "currency_symbol": null,
  "ledger": "http://central-ledger",
  "urls": {
    "health": "http://central-ledger/health",
    "positions": "http://central-ledger/positions",
    "account": "http://central-ledger/accounts/:name",
    "accounts": "http://central-ledger/accounts",
    "transfer": "http://central-ledger/transfers/:id",
    "transfer_fulfillment": "http://central-ledger/transfers/:id/fulfillment",
    "transfer_rejection": "http://central-ledger/transfers/:id/rejection",
    "account_transfers": "ws://central-ledger/accounts/:name/transfers"
  },
  "precision": 10,
  "scale": 2
}

4.2.3.2 Get net positions<a name="get_net_positions"></a>

Get current net positions for all accounts at the ledger

http://central-ledger/positions
GET http://central-ledger/positions HTTP/1.1
4.2.3.2.1 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| Positions | Array | List of current Position objects for the ledger |

HTTP/1.1 200 OK
{
  "positions": [
    {
      "account": "http://central-ledger/accounts/dfsp1",
      "payments": "208461.06",
      "receipts": "0",
      "net": "-208461.06"
    },
    {
      "account": "http://central-ledger/accounts/dfsp2",
      "payments": "0",
      "receipts": "208461.06",
      "net": "208461.06"
    }
  ]
}
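
A quick sanity check on this report: each net equals receipts minus payments, and across the ledger the nets sum to zero. A small sketch over the example data above:

const positions = [
  { payments: '208461.06', receipts: '0', net: '-208461.06' },
  { payments: '0', receipts: '208461.06', net: '208461.06' }
];
// Every position's net should be receipts - payments ...
positions.forEach((p) => {
  console.assert(Number(p.receipts) - Number(p.payments) === Number(p.net));
});
// ... and the whole ledger should net out to zero.
const total = positions.reduce((sum, p) => sum + Number(p.net), 0);
console.log(total === 0 ? 'balanced' : 'imbalance: ' + total);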

4.2.3.3 Settle fulfilled transfers<a name="settle_fulfilled_transfers"></a>

Settle all currently fulfilled transfers in the ledger

http://central-ledger/webhooks/settle-transfers
POST http://central-ledger/webhooks/settle-transfers HTTP/1.1
4.2.3.3.1 Response 200 OK

| Field | Type | Description |
| --- | --- | --- |
| N/A | Array | List of transfer ids settled for the ledger |

HTTP/1.1 200 OK
["3a2a1d9e-8640-4d2d-b06c-84f2cd613207", "7e10238b-4e39-49a4-93dc-c8f73afc1717"]

4.3 Error Information<a name="error_information"></a>

This section identifies the potential errors returned and the structure of the response.

An error object can have the following fields:

| Name | Type | Description |
| --- | --- | --- |
| id | String | An identifier for the type of error |
| message | String | A message describing the error that occurred |
| validationErrors | Array | Optional. An array of validation errors |
| validationErrors[].message | String | A message describing the validation error |
| validationErrors[].params | Object | An object containing the field that caused the validation error |
| validationErrors[].params.key | String | The name of the field that caused the validation error |
| validationErrors[].params.value | String | The value that caused the validation error |
| validationErrors[].params.child | String | The name of the child field |

HTTP/1.1 404 Not Found
Content-Type: application/json
{
  "id": "InvalidUriParameterError",
  "message": "Error validating one or more uri parameters",
  "validationErrors": [
    {
      "message": "id must be a valid GUID",
      "params": {
        "value": "7d4f2a70-e0d6-42dc-9efb-6d23060ccd6",
        "key": "id"
      }
    }
  ]
}

5 Central Rules

5.1 Component Architecture

Central Rules Block Diagram

5.2 Check Transfer Eligibility Flow

Check transfer eligibility sequence diagram

5.3 Endpoints

Endpoint documentation

6 Central Rules Endpoints

The Central Rules API determines whether a transfer to an End User is permitted.

6.1 Endpoints

6.1.1 Check transfer eligibility

This endpoint is used to find out if a transfer to an End User is permitted.

6.1.1.1 Allowed

GET https://central-rules/transfer?sender_user_number=11144455555555&receiver_user_number=11122233333333&amount=125
HTTP/1.1 200 OK
Content-Type: application/json
{
  "allowed": true
}

6.1.1.2 Not allowed

GET https://central-rules/transfer?sender_user_number=11144455555555&receiver_user_number=11122233333333&amount=100001

HTTP/1.1 200 OK
Content-Type: application/json
{
  "allowed": false,
  "reason": {
    "code": "transfer_limit_exceeded",
    "message": "Not allowed to send more than 100,000"
  }
}
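
A small client sketch for this check, assuming a fetch-capable Node.js runtime; the parameter names come from the example requests above.

(async () => {
  const params = new URLSearchParams({
    sender_user_number: '11144455555555',
    receiver_user_number: '11122233333333',
    amount: '125'
  });
  const result = await (await fetch('https://central-rules/transfer?' + params)).json();
  console.log(result.allowed ? 'transfer permitted' : 'blocked: ' + result.reason.code);
})();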

7 Overview

7.1 Contents

7.2 DFSP Microservices

DFSP functionality includes the following services:

  • dfsp-api - contains the business logic and exposes it as an API
  • dfsp-directory - methods related to lookup services, like finding URLs, obtaining lists of districts, towns, participants, etc.
  • dfsp-identity - methods for managing identity-related data, like sessions, images, PINs, etc.
  • dfsp-ledger - ledger service that keeps account balances and transfers.
  • dfsp-interledger - service implementing the Interledger protocol
  • dfsp-notification - SMS, email and smart app notifications
  • dfsp-rule - fees, limits and other rules, like checking where a voucher can be used. AML functionality.
  • dfsp-subscription - methods related to managing the data associated with a subscription, but not related to accounts.
  • dfsp-transfer - methods that relate to movement of money between accounts
  • dfsp-account - methods that affect ledger accounts, like creating new ones or relations between an account and other data like NFC, biometric, float, phone, signatories, etc.
  • dfsp-admin - web interface for the DFSP.
  • dfsp-mock - mocks services external to the DFSP.

7.3 Component Diagram

microservices component diagram

7.4 Flow Diagrams

7.4.1 Push Transfer Sequence Diagram

Push transfer sequence diagram

7.4.2 Bulk Transfer Sequence Diagram

Bulk transfer sequence diagram

7.5 Default Ports

Each service has default ports in the development environment. Below you can find these defaults for each project.

| project | debug console | httpserver port | API |
| --- | --- | --- | --- |
| dfsp-account | 30009 | 8009 | |
| dfsp-api | 30010 | 8010 | swagger |
| dfsp-directory | 30011 | 8011 | swagger |
| dfsp-identity | 30012 | 8012 | |
| dfsp-interledger | 30013 | 8013 | |
| dfsp-ledger | 30014 | 8014 | swagger |
| dfsp-notification | 30015 | 8015 | |
| dfsp-rule | 30016 | 8016 | swagger |
| dfsp-subscription | 30017 | 8017 | |
| dfsp-transfer | 30018 | 8018 | swagger |
| dfsp-ussd | 30019 | 8019 | swagger |
| dfsp-admin | 30020 | 8020 | |
| dfsp-mock | | 8021 | |

7.6 Development Environment Setup

See Development environment setup

8 Account service API


This service contains information about relations between users and their accounts. Accounts contain information about the following:

  • Which account is primary for a given user
  • Whether a particular user is a signatory for a given account

The account service can also manage user roles and their permissions. Each registered user has an assigned role in the system, and this role has predefined permissions for the allowed actions.

Roles can be one of the following:

  • Customer
  • Merchant
  • Agent

Permissions are as follows:

  • p2p - User is able to send peer to peer transfers
  • cashIn - User is able to cash in
  • cashOut - User is able to cash out
  • invoice - User is able to issue an invoice / Sell goods
  • ministatement - User is able to check mini-statement menu
  • balanceCheck - User is able to check their balance

Currently, permissions are assigned to the roles as follows:

  • Agent: p2p, ministatement, balanceCheck, cashIn, cashOut
  • Customer: p2p, ministatement, balanceCheck
  • Merchant: p2p, ministatement, balanceCheck, invoice

Account service exposes the following private API calls:

8.0.1 Add actor to a given account

  • URL

/rpc/account/actorAccount/add

  • Method

POST

  • Data Params

Required

  • accountId [number] - Account id
  • accountNumber [string] - Account number
  • actorId [string] - Actor id
  • roleName [string] - Name of the role
  • isDefault [boolean] - Is this the user's primary account
  • isSignatory [boolean] - Is this actor a signatory for this account

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account Id
    • actorId [string] - Actor Id
    • accountId [number] - Account Id
    • isDefault [boolean] - Is this the user's primary account
    • isSignatory [boolean] - Is this actor a signatory for this account
    • accountNumber [string] - Account number
    • permissions [string array] - Array with names of permissions
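
A hedged sketch of calling this endpoint. The flat JSON body and the host/port (dfsp-account defaults to 8009 per the ports table) are assumptions built from the documented parameters, not a confirmed wire format.

(async () => {
  const res = await fetch('http://localhost:8009/rpc/account/actorAccount/add', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      accountId: 1,
      accountNumber: '1000000001',
      actorId: '42',
      roleName: 'Customer',
      isDefault: true,
      isSignatory: false
    })
  });
  console.log(await res.json()); // expect the actorAccount fields listed above
})();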

8.0.2 Edit actor data for account

  • URL

/rpc/account/actorAccount/edit

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id
  • accountId [number] - Account id
  • actorId [string] - Actor id
  • isDefault [boolean] - Is this the user's primary account
  • isSignatory [boolean] - Is this actor a signatory for this account

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account Id
    • actorId [string] - Actor Id
    • accountId [number] - Account Id
    • isDefault [boolean] - Is this the user's primary account
    • isSignatory [boolean] - Is this actor a signatory for this account
    • accountNumber [string] - Account number
    • permissions [string array] - Array with names of permissions

8.0.3 Fetch actor data for account

  • URL

/rpc/account/actorAccount/fetch

  • Method

POST

  • Data Params

Required

  • accountId [number] - Account id
  • actorId [string] - Actor id
  • accountNumber [string] - Account number
  • isDefault [boolean] - Is this the user's primary account
  • isSignatory [boolean] - Is this actor a signatory for this account

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account Id
    • actorId [string] - Actor Id
    • accountId [number] - Account Id
    • isDefault [boolean] - Is this the user's primary account
    • isSignatory [boolean] - Is this actor a signatory for this account
    • accountNumber [string] - Account number
    • permissions [string array] - Array with names of permissions

8.0.4 Get actor data for account

  • URL

/rpc/account/actorAccount/get

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account Id
    • actorId [string] - Actor Id
    • accountId [number] - Account Id
    • isDefault [boolean] - Is this the user's primary account
    • isSignatory [boolean] - Is this actor a signatory for this account
    • accountNumber [string] - Account number
    • permissions [string array] - Array with names of permissions

8.0.5 Remove actor data for account

  • URL

/rpc/account/actorAccount/get

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id

  • Success Response

  • Code: 200
    Content
    • accountId [number] - Account id

8.0.6 Add permissions for account

  • URL

/rpc/account/actorAccountPermission/add

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id
  • permissions [string array] - Array with the name of the permissions

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account id
    • permissions [string array] - Array with the name of the permissions

8.0.7 Get permissions for account

  • URL

/rpc/account/actorAccountPermission/get

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account id
    • permissions [string array] - Array with the name of the permissions

8.0.8 Remove permissions for account

  • URL

/rpc/account/actorAccountPermission/remove

  • Method

POST

  • Data Params

Required

  • actorAccountId [number] - Actor account id
  • permissions [string array] - Array with the name of the permissions

  • Success Response

  • Code: 200
    Content
    • actorAccountId [number] - Actor account Id
    • permissions [string array] - Array with names of permissions

8.0.9 Fetch account roles

  • URL

/rpc/account/role/fetch

  • Method

POST

  • Data Params

Required

NONE

  • Success Response

  • Code: 200
    Content
    • roleId [number] - Role Id
    • name [string] - Role name
    • description [string] - Role description

9 Setting up the Development Environment

9.1 Install development tools

9.1.1 Install Visual Studio Code editor

Download and install VS Code from Download Visual Studio Code.

9.1.2 Install Node.js platform

Download and install the latest stable version of Node.js from Node.js downloads.
Check the version by typing node -v in the console. It should be at least 4.5.0.

9.1.3 Update npm package manager

Node comes with npm installed. Update to the latest version with the command npm install npm -g. Check the version of npm with the command npm -v. It should be higher than 3.10.

9.1.4 Install git

Install git as appropriate for your operating system.

9.2 Clone the project

Generate and add an SSH key to GitHub. If you are not sure how to do that, you can follow the guide Generate an SSH key.
Navigate to the directory where the project should be cloned and type in the console, for example:

git clone git@github.com:LevelOneProject/dfsp-directory.git

9.3 Configuration file for the database

Create a configuration file named .ut_dfsp_directory_devrc in the home directory (C:/Users/[username]) that contains the individual access settings for the database. The content of the file should be the following:

[db.db]
database=dfsp-directory-<name>-<surname>
user=<name>.<surname>
password=<password>

[db.create]
user=<admin user>
password=<admin password>

The common settings can be found in the dev.json file in the server directory of the project.

9.4 Run npm install

In the project directory (dfsp-directory) run npm install.

9.5 Launch Configurations

Debugging in VS Code requires a launch configuration file, launch.json. To create it, click the Configure gear icon on the Debug view top bar and choose a debug environment; VS Code will generate a launch.json file under the workspace's .vscode directory.
The launch.json generated for Node.js debugging should look like the following:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "directory",
            "type": "node",
            "request": "launch",
            "program": "${workspaceRoot}/index.js",
            "stopOnEntry": false,
            "args": [],
            "cwd": "${workspaceRoot}",
            "preLaunchTask": null,
            "runtimeExecutable": null,
            "runtimeArgs": [
                "--nolazy"
            ],
            "env": {
                "NODE_ENV": "development"
            },
            "externalConsole": false,
            "sourceMaps": false,
            "outDir": null
        }
    ]
}

  • name: name of configuration; appears in the launch configuration drop down menu
  • type: type of configuration; possible values: "node", "mono"
  • program: workspace relative or absolute path to the program
  • stopOnEntry: automatically stop program after launch
  • args: command line arguments passed to the program
  • cwd: workspace relative or absolute path to the working directory of the program being debugged. Default is the current workspace
  • runtimeExecutable: workspace relative or absolute path to the runtime executable to be used. Default is the runtime executable on the PATH
  • runtimeArgs: optional arguments passed to the runtime executable
  • env: environment variables passed to the program
  • sourceMaps: use JavaScript source maps (if they exist)
  • outDir: if JavaScript source maps are enabled, the generated code is expected in this directory

9.6 Required Extensions for VS Code

In VS Code, press ctrl + shift + p and type Install Extensions in the input field that opens, or press ctrl + shift + x to go directly to the extensions.

9.6.1 beautify

Find beautify in the Extensions marketplace, install and enable it. This extension enables running js-beautify in VS Code. The generated .jsbeautifyrc file defines the code styling. It should have the following settings:

{
  "end_with_newline": true,
  "wrap_line_length": 160,
  "e4x": true,
  "jslint_happy": true,
  "indent_size": 2
}

Formatting code can be done with Shift + Alt + F.

9.6.2 CircleCI

Find CircleCI in the Extensions marketplace and install it. To enable it, go to CircleCI and create an API token. Add it as circleci.apiKey in the Workspace Settings in VS Code (File -> Preferences -> Workspace Settings):

{
    "circleci.apiKey": [API token]
}

9.6.3 ESLint

This extension contributes the following variables to the Default settings of VS Code:

"eslint.enable": true,
"eslint.options": {}

  • eslint.enable: enabled by default
  • eslint.options: options to configure how eslint is started. They can be specified for all projects in the User Settings (File -> Preferences -> User Settings), or per project in the Workspace Settings (File -> Preferences -> Workspace Settings), in which case the Workspace Settings override the User Settings.

Each project includes the module ut-tools as a development dependency. You need to point the eslint configFile option to the eslint settings used.

9.6.3.1 Example

{
    "eslint.options": {
        "configFile": "/[path-to-project]/node_modules/ut-tools/eslint/l1p.eslintrc"
    }
}

10 Directory service API

The directory service is used for lookup services: For example, finding URLs, obtaining lists of districts, towns, participants, and so on.

  1. directory.item.fetch - returns item lists for things like currencies, countries, districts, towns, etc.
  2. directory.participant.fetch - returns list of DFSPs, merchants, NGOs and other types of participants
  3. directory.name.get - looks up name of end user, given the end user number
    • parameters
      • userNumber | accountNumber - recipient user or account Number
    • result
      • userURL | accountURL - recipient full user or account URLs, including the end point DFSP
      • currency - recipient account currency
    • errors
      • directory.userNotFound - recipient not found
      • directory.accountNotFound - recipient account not found
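
A hedged sketch of a directory.name.get call. The dotted method names suggest an RPC envelope; the JSON-RPC 2.0 framing and the port (dfsp-directory defaults to 8011 per the ports table) are assumptions, since only the method names and parameters are documented here.

(async () => {
  const res = await fetch('http://localhost:8011/rpc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'directory.name.get',
      params: { userNumber: '11122233333333' }
    })
  });
  console.log(await res.json()); // expect userURL/accountURL and currency, or directory.userNotFound
})();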

11 Identity Service API


Identity Service is used for managing identity-related data, such as sessions, images, PINs, and so on. This service contains information about all the available actions and the roles that can perform them.

Roles can be one of the following:

  • common - Default roles
  • maker - Batch payment maker role
  • checker - Batch payment checker role

Actions are defined as follows:

  • bulk.batch.add - Create new batch
  • bulk.batch.edit - Edit batch
  • bulk.batch.fetch - Fetch batches by criteria
  • bulk.batch.get - Get batch details
  • bulk.batch.reject - Reject batch
  • bulk.batch.disable - Disable batch
  • bulk.batch.pay - Pay batch
  • bulk.batch.check - Check batch
  • bulk.batch.ready - Mark batch as ready
  • bulk.batch.delete - Mark batch as deleted
  • bulk.batch.process - Process batch
  • bulk.payment.check - Check payment details
  • bulk.payment.disable - Disable payment
  • bulk.payment.edit - Edit payment
  • bulk.payment.fetch - Fetch payments
  • bulk.payment.add - Create payment
  • bulk.paymentStatus.fetch - Fetch list with payment statuses
  • bulk.batchStatus.fetch - Fetch list with batch statuses
  • core.transaltion.fetch - Translation fetch
  • rule.rule.fetch - Rule fetch
  • rule.item.fetch - Item fetch
  • rule.rule.add - Rule add
  • rule.rule.edit - Rule edit
  • ledger.account.fetch - Fetch accounts

The identity service exposes the following private API calls:

11.0.1 Login action

  • URL

/login

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • username [string] - Username
  • password [string] - Login password
  • sessionId [string] - Generated session id

  • Success Response

  • Code: 200
    Content
    • identity.check [json] - JSON containing the following fields
      • actorId [string] - Actor id
      • sessionId [string] - Session id
    • permission.get [json] - JSON containing the following fields
      • actionId [string] - Action id
      • objectId [string] - Object id
      • description [string] - Action description
    • language [json] - JSON with the user language
    • localisation [json] - JSON with the following fields
      • dateFormat [string] - Date format
      • numberFormat [string] - Number format
    • roles [json] - JSON containing the user roles
    • screenHeader [string] - Screen header
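
A hedged sketch of the login call; the flat JSON body is an assumption built from the documented parameters, and the host/port (dfsp-identity defaults to 8012 per the ports table) and all values are illustrative.

(async () => {
  const res = await fetch('http://localhost:8012/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      actorId: '42',
      username: 'alice',
      password: 'secret',
      sessionId: 'session-123'
    })
  });
  const session = await res.json();
  console.log(session); // identity.check, permission.get, language, localisation, roles
})();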

11.0.2 Identity add

  • URL

/rpc/identity/add

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • type [string] - Type
  • identifier [string] - User identifier
  • algorithm [string] - Used algorithm
  • params [string] - Input params
  • value [string] - Input value
  • roles [string array] - Array of role names

  • Success Response

  • Code: 200
    Content
    • actor [json] - JSON containing the following fields
      • actorId [string] - Actor id

11.0.3 Identity close session

  • URL

/rpc/identity/closeSession

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • sessionId [string] - Generated session id

  • Success Response

  • Code: 200
    Content
    • data [json] - JSON containing an empty array

11.0.4 Identity get

  • URL

/rpc/identity/get

  • Method

POST

  • Data Params

Required

  • username [string] - Username
  • actorId [string] - Actor id
  • type [string] - Type: password/ussd

  • Success Response

  • Code: 200
    Content
    • hashParams [json] - JSON containing the following fields
      • params [string] - Params
      • algorithm [string] - Algorithm
      • actorId [string] - Actor id
      • type [string] - Type: password/ussd
    • roles [json] - JSON containing all assigned roles for this actorId

12 Notification Service API

Notification service is responsible for SMS, email and smart app notifications. The service chooses the appropriate channels and devices for the notification.

12.1 Notification.message.add

Adds a message to the notification queue.

* parameters
  * sourceUser - the unique identifier of the source user
  * destinationUser - the unique identifier of the destination user
  * sourceAccount
  * destinationAccount
  * sourceAmount
  * destinationAmount
  * sourceCurrency
  * destinationCurrency
  * message - the sender's message
  * dateTime - the date and time of the transfer
  * transactionType - the type of the transaction (p2p, cash in, cash out, pending, vouchers, and so on)
* result
  * notificationId - a unique identifier of the notification
* errors
  * notification.invalid - invalid parameter
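
An illustrative parameters object for this call, assembled from the list above; all values are invented for the example.

const params = {
  sourceUser: '11144455555555',
  destinationUser: '11122233333333',
  sourceAccount: '1000000001',
  destinationAccount: '1000000002',
  sourceAmount: '125.00',
  destinationAmount: '125.00',
  sourceCurrency: 'USD',
  destinationCurrency: 'USD',
  message: 'Payment for invoice 17',
  dateTime: '2016-09-28T17:03:37.168Z',
  transactionType: 'p2p'
};
// A successful call returns a notificationId; invalid input yields notification.invalid.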

13 Rule service API

This service contains methods related to fees, limits and other rules.

13.1 rule.condition.check

Check for limits/fraud and return the applicable tier (local) fee.

* parameters
  * transferType - type of transfer (push, pending, bulk, and so on)
  * destinationURL - recipient URL
  * sourceURL - sender URL
  * sourceAmount | destinationAmount - the source or the destination amount of the transfer
  * currency - the respective currency of the amount
* result
  * destinationAmount | sourceAmount - the destination or source amount, including fees/rates
  * currency - the amount currency
* errors
  * rule.unknownDestination - receiver not found
  * rule.unknownSource - sender not found
  * rule.invalidAmount
  * rule.invalidCurrency
  * rule.fraudViolation
  * rule.limitViolation
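
A hedged sketch of rule.condition.check over the same assumed JSON-RPC framing as the directory example (dfsp-rule defaults to port 8016 per the ports table); all values are illustrative.

(async () => {
  const res = await fetch('http://localhost:8016/rpc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'rule.condition.check',
      params: {
        transferType: 'push',
        sourceURL: 'http://dfsp1/users/1',
        destinationURL: 'http://dfsp2/users/2',
        sourceAmount: '125.00',
        currency: 'USD'
      }
    })
  });
  console.log(await res.json()); // destinationAmount incl. fees/rates, or a rule.* error
})();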

13.2 rule.push.execute

Instruct the rule service that a transfer will be executed.

* parameters
  * transferType - type of transfer (push, pending, bulk, etc.)
  * destinationURL - recipient URL
  * sourceURL - sender URL
  * sourceAmount - source amount
  * destinationAmount - the destination amount of the transfer
  * sourceCurrency - the source currency of the amount
  * destinationCurrency - the destination currency of the amount
  * transferId - a unique identifier of the transfer
* result
* errors
  * rule.unknownDestination - receiver not found
  * rule.unknownSource - sender not found
  * rule.invalidSourceAmount
  * rule.invalidSourceCurrency
  * rule.invalidDestinationAmount
  * rule.invalidDestinationCurrency
  * rule.invalidTransferId
  * rule.duplicatedTransferId
  * rule.fraudViolation
  * rule.limitViolation

13.3 rule.push.reverse

Instruct the rule service that a transfer with a token reference has been rolled back.

* parameters
  * transferId - a unique identifier of the transfer
* result
* errors
  * rule.invalidTransferId
  * rule.alreadyReversed

13.4 rule.voucher.check

Check voucher’s applicability

14 Subscription service API


This service is used for mapping between users and phone numbers. It supports the following private API calls:

14.0.1 Add subscription

  • URL

/rpc/subscription/subscription/add

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • phoneNumber [string] - Phone number

  • Success Response

  • Code: 200
    Content
    • subscriptionId [number] - Subscription id
    • actorId [string] - Actor id
    • phoneNumber [string] - Phone number

14.0.2 Get subscription

  • URL

/rpc/subscription/subscription/get

  • Method

POST

  • Data Params

Optional

  • actorId [string] - Actor id
  • phoneNumber [string] - Phone number

  • Success Response

  • Code: 200
    Content
    • actorId [string] - Actor id
    • phoneNumber [string] - Phone number

14.0.3 Remove subscription

  • URL

/rpc/subscription/subscription/remove

  • Method

POST

  • Data Params

Optional

  • subscriptionId [number] - Subscription id

  • Success Response

  • Code: 200
    Content
    • subscriptionId [number] - Subscription id

15 Transfer service API


This service contains information about transfers, invoices and invoice notifications. It is used to hold the following data:

  • Invoices when they are created by merchants
  • Invoice notifications when they are sent from merchant’s DFSP to the client’s DFSP
  • Invoice types
  • Invoice statuses
  • Invoice payments

Invoice types can be one of the following:

  • Standard - Standard invoices
  • Pending - Unassigned one-time invoice
  • Product - Unassigned multi-payer invoice
  • CashOut - Cash out invoices

Invoice statuses are:

  • executed - Invoice has been executed by customer
  • approved - Invoice has been approved by customer
  • pending - Invoice is pending
  • rejected - Invoice has been rejected by customer
  • cancelled - Invoice has been cancelled by merchant

Transfer service exposes the following private API calls in two spaces - [bulk] and [transfer]:

15.0.1 Add batch

  • URL

/rpc/bulk/batch/add

  • Method

POST

  • Data Params

Required

  • name [string] - Batch name
  • actorId [string] - Actor id
  • fileName [string] - File name
  • originalFileName [string] - Original file name

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch id
    • name [string] - Batch name
    • batchStatusId [number] - Batch status id
    • actorId [string] - Actor id

15.0.2 Edit batch

  • URL

/rpc/bulk/batch/edit

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • batchId [number] - Batch Id

Optional

  • account [string] - Account
  • startDate [date] - Batch start date
  • expirationDate [date] - Batch expiration date
  • name [string] - Batch name
  • batchStatusId [number] - Batch status id
  • batchInfo [string] - Batch info
  • uploadInfo [string] - Upload info
  • fileName [string] - Batch file name
  • originalFileName [string] - Batch original file name
  • validatedAt [date] - Batch validation date

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch Id
    • account [string] - Account
    • startDate [date] - Batch start date
    • expirationDate [date] - Batch expiration date
    • name [string] - Batch name
    • batchStatusId [number] - Batch status id
    • batchInfo [string] - Batch info
    • uploadInfo [string] - Upload info
    • actorId [string] - Actor id
    • fileName [string] - Batch file name
    • originalFileName [string] - Batch original file name
    • validatedAt [date] - Batch validation date

15.0.3 Fetch batch

  • URL

/rpc/bulk/batch/fetch

  • Method

POST

  • Data Params

Optional

  • actorId [string] - Actor id
  • name [string] - Batch name
  • batchStatusId [number] - Batch status id
  • fromDate [date] - From date
  • toDate [date] - To date

**Note:** 'fromDate' and 'toDate' relate to the creation date of the batch, not to the batch's 'startDate' and 'expirationDate'.

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch Id
    • account [string] - Account
    • startDate [date] - Batch start date
    • expirationDate [date] - Batch expiration date
    • name [string] - Batch name
    • batchStatusId [number] - Batch status id
    • batchInfo [string] - Batch info
    • uploadInfo [string] - Upload info
    • actorId [string] - Actor id
    • fileName [string] - Batch file name
    • originalFileName [string] - Batch original file name
    • validatedAt [date] - Batch validation date

15.0.4 Get batch

  • URL

/rpc/bulk/batch/get

  • Method

POST

  • Data Params

Required

  • batchId [number] - Batch id

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch Id
    • name [string] - Batch name
    • account [string] - Account
    • startDate [date] - Batch start date
    • expirationDate [date] - Batch expiration date
    • batchStatusId [number] - Batch status id
    • actorId [string] - Actor id
    • info [string] - Batch info
    • fileName [string] - Batch file name
    • originalFileName [string] - Batch original file name
    • createdAt [date] - Batch create date
    • status [string] - Batch status
    • updatedAt [date] - Batch update date
    • paymentsCount [number] - Batch payments count

15.0.5 Process batch

  • URL

/rpc/bulk/batch/process

  • Method

POST

  • Data Params

Required

  • batchId [number] - Batch id
  • actorId [string] - Actor id
  • startDate [date] - Batch start date
  • expirationDate [date] - Batch expiration date
  • account [string] - Account

  • Success Response

  • Code: 200
    Content
    • queued [number] - Count of the payments added in the queue

15.0.6 Batch ready

  • URL

/rpc/bulk/batch/ready

  • Method

POST

  • Data Params

Required

  • batchId [number] - Batch id
  • actorId [string] - Actor id

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch Id
    • account [string] - Account
    • startDate [date] - Batch start date
    • expirationDate [date] - Batch expiration date
    • name [string] - Batch name
    • batchStatusId [number] - Batch status id
    • batchInfo [string] - Batch info
    • uploadInfo [string] - Upload info
    • actorId [string] - Actor id
    • fileName [string] - Batch file name
    • originalFileName [string] - Batch original file name
    • validatedAt [date] - Batch validation date

15.0.7 Batch revert status

  • URL

/rpc/bulk/batch/revertStatus

  • Method

POST

  • Data Params

Required

  • batchId [number] - Batch id
  • actorId [string] - Actor id
  • partial [boolean] - Whether a single payment is checked or the whole batch

  • Success Response

  • Code: 200
    Content
    • batchId [number] - Batch Id
    • account [string] - Account
    • startDate [date] - Batch start date
    • expirationDate [date] - Batch expiration date
    • name [string] - Batch name
    • batchStatusId [number] - Batch status id
    • batchInfo [string] - Batch info
    • uploadInfo [string] - Upload info
    • actorId [string] - Actor id
    • fileName [string] - Batch file name
    • originalFileName [string] - Batch original file name
    • validatedAt [date] - Batch validation date

15.0.8 Fetch batch status

  • URL

/rpc/bulk/batchStatus/fetch

  • Method

POST

  • Data Params

Required

NONE

  • Success Response

  • Code: 200
    Content
    • key [number] - Status key
    • name [string] - Status name
    • description [string] - Status description

15.0.9 Add payments

  • URL

/rpc/bulk/payment/add

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • payments [json] - json containing list with payments
  • batchId [number] - Batch id

payments should have the following fields included:

  • sequenceNumber [number] - Sequence number
  • identifier [string] - User's identifier
  • firstName [string] - User's first name
  • lastName [string] - User's last name
  • dob [date] - Date of birth
  • nationalId [string] - National Id
  • amount [number] - Amount

  • Success Response

  • Code: 200
    Content
    • insertedRows [number] - Count of inserted payments
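
A minimal sketch of the payments payload, using the fields listed above (host, port and all values are illustrative assumptions):

curl -X POST http://localhost:8080/rpc/bulk/payment/add \
  -H 'Content-Type: application/json' \
  -d '{
        "actorId": "1001",
        "batchId": 7,
        "payments": [
          {
            "sequenceNumber": 1,
            "identifier": "25570000001",
            "firstName": "Jane",
            "lastName": "Doe",
            "dob": "1990-01-15",
            "nationalId": "A1234567",
            "amount": 150.00
          }
        ]
      }'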

15.0.10 Edit payments

  • URL

/rpc/bulk/payment/edit

  • Method

POST

  • Data Params

Required

  • actorId [string] - Actor id
  • payments [json] - json containing list with payments

payments should have the following fields included:

  • paymentId [number] - Payment id
  • batchId [number] - Batch id
  • sequenceNumber [number] - Sequence number
  • identifier [string] - User's identifier
  • firstName [string] - User's first name
  • lastName [string] - User's last name
  • dob [date] - Date of birth
  • nationalId [string] - National Id
  • amount [number] - Amount
  • info [string] - Payment info
  • payee [json] - Payee info

  • Success Response

  • Code: 200
    Content
    • payments [json] - json with the edited payments

15.0.11 Fetch payments

  • URL

/rpc/bulk/payment/fetch

  • Method

POST

  • Data Params

Optional

  • paymentId [number array] - Array with payment ids
  • batchId [number] - Batch id
  • nationalId [string] - National id
  • paymentStatusId [number array] - Array with payment status ids
  • fromDate [date] - From date
  • toDate [date] - To date
  • sequenceNumber [number] - Sequence number
  • name [string] - Batch name
  • pageSize [number] - Page size
  • pageNumber [number] - Page number

  • Success Response

  • Code: 200
    Content
    • data [json] - Result set from the search
    • pagination [json] - json with the following fields included
      • 'pageNumber' - Requested page number
      • 'pageSize' - Returned payments from the result set for this page
      • 'pagesTotal' - Returned count of pages from the result set
      • 'recordsTotal' - Total count of payments matched from search
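
For example, a paginated fetch might look like this (host, port and values are assumptions for the sketch):

curl -X POST http://localhost:8080/rpc/bulk/payment/fetch \
  -H 'Content-Type: application/json' \
  -d '{"batchId": 7, "paymentStatusId": [1, 2], "pageSize": 25, "pageNumber": 2}'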

15.0.12 Get payment

  • URL

/rpc/bulk/payment/get

  • Method

POST

  • Data Params

Required

  • paymentId [number] - Payment id

  • Success Response

  • Code: 200
    Content
    • paymentId [number] - Payment id
    • batchId [number] - Batch id
    • sequenceNumber [number] - Sequence number
    • identifier [string] - User's identifier
    • firstName [string] - User's first name
    • lastName [string] - User's last name
    • dob [date] - User's date of birth
    • nationalId [string] - User's national id
    • amount [number] - Transfer amount
    • paymentStatusId [number] - Payment status id
    • info [string] - Payment info
    • payee [json] - Payee data
    • name [string] - Batch name
    • createdAt [date] - Payment's created at date
    • updatedAt [date] - Payment's updated at date
    • account [string] - Batch account
    • startDate [date] - Batch's start date
    • expirationDate [date] - Batch's expiration date
    • actorId [string] - Actor id

15.0.13 Get payments for processing

  • URL

/rpc/bulk/payment/getForProcessing

  • Method

POST

  • Data Params

Optional

  • count [number] - Number of payments to be returned. Default is set to 100

  • Success Response

  • Code: 200
    Content
    • paymentId [number] - Payment id
    • batchId [number] - Batch id
    • sequenceNumber [number] - Sequence number
    • identifier [string] - User's identifier
    • firstName [string] - User's first name
    • lastName [string] - User's last name
    • dob [date] - User's date of birth
    • nationalId [string] - User's national id
    • amount [number] - Transfer amount
    • paymentStatusId [number] - Payment status id
    • info [string] - Payment info
    • createdAt [date] - Payment's created at date
    • updatedAt [date] - Payment's updated at date

15.0.14 Pre-process payment

  • URL

/rpc/bulk/payment/preProcess

  • Method

POST

  • Data Params

Required

  • paymentId [number] - Payment id

  • Success Response

  • Code: 200
    Content
    • paymentId [number] - Payment id
    • batchId [number] - Batch id
    • sequenceNumber [number] - Sequence number
    • identifier [string] - User's identifier
    • firstName [string] - User's first name
    • lastName [string] - User's last name
    • dob [date] - User's date of birth
    • nationalId [string] - User's national id
    • amount [number] - Transfer amount
    • paymentStatusId [number] - Payment status id
    • info [string] - Payment info
    • payee [json] - Payee data
    • name [string] - Batch name
    • createdAt [date] - Payment's created at date
    • updatedAt [date] - Payment's updated at date
    • account [string] - Batch account
    • startDate [date] - Batch's start date
    • expirationDate [date] - Batch's expiration date
    • actorId [string] - Actor id

15.0.15 Process payment

  • URL

/rpc/bulk/payment/process

  • Method

POST

  • Data Params

Required

  • paymentId [number] - Payment id
  • actorId [string] - Actor id
  • error [string] - Error message

  • Success Response

  • Code: 200
    Content
    • paymentId [number] - Payment id
    • batchId [number] - Batch id
    • sequenceNumber [number] - Sequence number
    • identifier [string] - User's identifier
    • firstName [string] - User's first name
    • lastName [string] - User's last name
    • dob [date] - User's date of birth
    • nationalId [string] - User's national id
    • amount [number] - Transfer amount
    • paymentStatusId [number] - Payment status id
    • info [string] - Payment info
    • payee [json] - Payee data
    • name [string] - Batch name
    • createdAt [date] - Payment's created at date
    • updatedAt [date] - Payment's updated at date
    • account [string] - Batch account
    • startDate [date] - Batch's start date
    • expirationDate [date] - Batch's expiration date
    • actorId [string] - Actor id

15.0.16 Fetch payment statuses

  • URL

/rpc/bulk/paymentStatus/fetch

  • Method

POST

  • Data Params

Required

NONE

  • Success Response

  • Code: 200
    Content
    • key [number] - Payment status key
    • name [string] - Payment status name
    • description [string] - Payment status description

15.0.17 Add invoice notification

  • URL

/rpc/transfer/invoiceNotification/add

  • Method

POST

  • Data Params

Required

  • invoiceUrl [string] - Invoice URL
  • identifier [string] - Identifier
  • memo [string] - Invoice memo

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.18 Cancel invoice notification

  • URL

/rpc/transfer/invoiceNotification/cancel

  • Method

POST

  • Data Params

Required

  • invoiceUrl [string] - Invoice URL

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.19 Edit invoice notification

  • URL

/rpc/transfer/invoiceNotification/edit

  • Method

POST

  • Data Params

Required

  • invoiceNotificationId [number] - Invoice notification id
  • invoiceNotificationStatusId [number] - Invoice notification status id

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.20 Execute invoice notification

  • URL

/rpc/transfer/invoiceNotification/execute

  • Method

POST

  • Data Params

Required

  • invoiceNotificationId [number] - Invoice notification id

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.21 Fetch invoice notification

  • URL

/rpc/transfer/invoiceNotification/fetch

  • Method

POST

  • Data Params

Required

  • identifier [string] - Identifier
  • status [string] - Invoice notification status

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.22 Get invoice notification

  • URL

/rpc/transfer/invoiceNotification/get

  • Method

POST

  • Data Params

Required

  • invoiceNotificationId [number] - Invoice notification id

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.23 Reject invoice notification

  • URL

/rpc/transfer/invoiceNotification/reject

  • Method

POST

  • Data Params

Required

  • invoiceNotificationId [number] - Invoice notification id

  • Success Response

  • Code: 200
    Content
    • invoiceNotificationId [number] - Invoice notification id
    • invoiceUrl [string] - Invoice URL
    • identifier [string] - Identifier
    • status [string] - Invoice status
    • memo [string] - Invoice memo

15.0.24 Invoice add

  • URL

/rpc/transfer/invoice/add

  • Method

POST

  • Data Params

Required

  • account [string] - Account
  • name [string] - Name
  • currencyCode [string] - Currency code
  • amount [number] - Amount
  • merchantIdentifier [string] - Merchant identifier
  • identifier [string] - Client identifier
  • invoiceType [string] - Invoice type
  • invoiceInfo [string] - Invoice info

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info
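
A hypothetical invoice-add request using the required fields above (host, port, account names and amounts are illustrative only):

curl -X POST http://localhost:8080/rpc/transfer/invoice/add \
  -H 'Content-Type: application/json' \
  -d '{
        "account": "merchant-account-1",
        "name": "Store 24 receipt 0042",
        "currencyCode": "USD",
        "amount": 25.50,
        "merchantIdentifier": "25570000099",
        "identifier": "25570000001",
        "invoiceType": "Standard",
        "invoiceInfo": "groceries"
      }'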

15.0.25 Invoice cancel

  • URL

/rpc/transfer/invoice/cancel

  • Method

POST

  • Data Params

Required

  • invoiceId [number] - Invoice id

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.26 Invoice edit

  • URL

/rpc/transfer/invoice/edit

  • Method

POST

  • Data Params

Required

  • invoiceId [number] - Invoice id
  • invoiceStatusId [number] - Invoice status id

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.27 Invoice execute

  • URL

/rpc/transfer/invoice/execute

  • Method

POST

  • Data Params

Required

  • invoiceId [number] - Invoice id
  • identifier [string] - Identifier

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.28 Invoice fetch

  • URL

/rpc/transfer/invoice/fetch

  • Method

POST

  • Data Params

Optional

  • merchantIdentifier [string] - Merchant identifier
  • account [string] - Account
  • status [string array] - Array with invoice statuses
  • invoiceType [string array] - Array with invoice types

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.29 Invoice get

  • URL

/rpc/transfer/invoice/get

  • Method

POST

  • Data Params

Optional

  • invoiceId [number] - Invoice id

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.30 Invoice reject

  • URL

/rpc/transfer/invoice/reject

  • Method

POST

  • Data Params

Optional

  • invoiceId [number] - Invoice id

  • Success Response

  • Code: 200
    Content
    • type [string] - Invoice type
    • invoiceId [number] - Invoice id
    • account [string] - Account
    • name [string] - Name
    • currencyCode [string] - Currency code
    • currencySymbol [string] - Currency symbol
    • amount [number] - Amount
    • status [string] - Invoice status
    • invoiceType [string] - Invoice type
    • merchantIdentifier [string] - Merchant identifier
    • invoiceInfo [string] - Invoice info

15.0.31 Invoice payer add

  • URL

/rpc/transfer/invoicePayer/add

  • Method

POST

  • Data Params

Optional

  • invoiceId [number] - Invoice id
  • identifier [string] - Identifier

  • Success Response

  • Code: 200
    Content
    • invoicePayerId [number] - Invoice payer id
    • invoiceId [number] - Invoice id
    • identifier [string] - Identifier
    • createdAt [date] - Created at date

15.0.32 Invoice payer fetch

  • URL

/rpc/transfer/invoicePayer/fetch

  • Method

POST

  • Data Params

Optional

  • invoiceId [number] - Invoice id
  • paid [boolean] - Paid

  • Success Response

  • Code: 200
    Content
    • invoicePayerId [number] - Invoice payer id
    • invoiceId [number] - Invoice id
    • identifier [string] - Identifier
    • createdAt [date] - Created at date

15.0.33 Invoice payer get

  • URL

/rpc/transfer/invoicePayer/get

  • Method

POST

  • Data Params

Optional

  • invoicePayerId [number] - Invoice payer id

  • Success Response

  • Code: 200
    Content
    • invoicePayerId [number] - Invoice payer id
    • invoiceId [number] - Invoice id
    • identifier [string] - Identifier
    • createdAt [date] - Created at date

15.0.34 Transfer push execute

  • URL

/rpc/transfer/push/execute

  • Method

POST

  • Data Params

Optional

  • sourceAccount [string] - Source account
  • receiver [string] - Receiver
  • destinationAmount [number] - Destination amount
  • currency [string] - Currency code
  • fee [number] - Fee amount
  • memo [string] - Transaction memo

  • Success Response

  • Code: 200
    Content
    • id [string] - Payment id
    • address [string] - Address
    • destinationAmount [number] - Destination amount
    • sourceAmount [number] - Source amount
    • sourceAccount [string] - Source account
    • expiresAt [date] - Expiration date
    • condition [string] - Condition
    • fulfillment [string] - Fulfillment
    • status [string] - Status
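
For illustration, a push transfer request might look like this (host, port and the concrete values are assumptions for the sketch):

curl -X POST http://localhost:8080/rpc/transfer/push/execute \
  -H 'Content-Type: application/json' \
  -d '{
        "sourceAccount": "dfsp1-account",
        "receiver": "25570000001",
        "destinationAmount": 100,
        "currency": "USD",
        "fee": 1.50,
        "memo": "push payment example"
      }'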

16 Logging Setup

Steps to perform an ELK 5.x stack installation on an AWS EC2 RHEL instance.

17 JDK 8 Installation

If not already installed, JDK 8 must be installed to continue with the
ELK 5.x setup.

17.0.1 Install wget to download JDK 8 rpm

# yum -y install wget

17.0.2 Download JDK 8 rpm

# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.rpm

17.0.3 Check JDK 8 rpm sha256 sum

# sha256sum jdk-8u121-linux-x64.rpm

17.0.4 Compare JDK 8 rpm sha256 sum against

https://www.oracle.com/webfolder/s/digest/8u121checksum.html

17.0.5 Install JDK 8 rpm

# rpm -ivh jdk-8u121-linux-x64.rpm

17.0.6 Set Java default

java -version

If not 1.8.0_121, make it your default java using the alternatives
command:

sudo alternatives --config java

Enter the selection number to choose which java executable should be
used by default.

17.1 Elasticsearch Installation

17.1.1 Import Elasticsearch PGP Key

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

17.2 Create a file with the elasticsearch repository information

# vi /etc/yum.repos.d/elasticsearch.repo

Add following contents:


[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

17.2.1 Install elasticsearch

# yum -y install elasticsearch

17.2.2 Configure elasticsearch

# sudo vim /etc/elasticsearch/elasticsearch.yml

Go to the Network section and modify network.host:

network.host: _eth0_,_local_

17.2.3 Start/Stop/Restart elasticsearch

sudo service elasticsearch start

sudo service elasticsearch stop

sudo service elasticsearch restart

17.2.4 Make elasticsearch more verbose by removing the "--quiet" flag

# vi /usr/lib/systemd/system/elasticsearch.service

17.2.5 Restart elasticsearch service and perform daemon reload

sudo systemctl daemon-reload

sudo service elasticsearch restart

17.2.6 Check that elasticsearch is running

Install netcat, if not already installed, for debugging purposes:

# yum -y install nmap-ncat

# ncat -v localhost 9200

Ncat: Version 6.40 ( http://nmap.org/ncat )

Ncat: Connected to ::1:9200.

GET /

HTTP/1.0 404 Not Found

es.index_uuid: _na_

es.resource.type: index_or_alias

es.resource.id: bad-request

es.index: bad-request

content-type: application/json; charset=UTF-8

content-length: 367

{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"bad-request","index_uuid":"_na_","index":"bad-request"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"bad-request","index_uuid":"_na_","index":"bad-request"},"status":404}

If localhost is not the hostname, specify the correct hostname or use
the server ip here. Also, you must type "GET /" once connected.

You can also use the curl command to check that elasticsearch is running:

# curl -X GET http://localhost:9200/
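
A healthy node answers with HTTP 200 and a JSON banner roughly like the abridged one below (all values are illustrative for a 5.x node):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "5.6.0",
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}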

17.3 Kibana Installation

17.3.1 Install kibana

# yum -y install kibana

17.3.2 Configure Kibana

# vi /etc/kibana/kibana.yml

Uncomment and make sure to set the following 4 entries:

server.port: 5601
server.host: "localhost"
server.name: "localhost"
elasticsearch.url: http://localhost:9200

NOTE: localhost needs to be replaced by the actual hostname or server
ip depending on the ELK stack configuration; currently the entire ELK
stack is running on the same server.

17.3.3 Start/Stop/Restart Kibana

sudo service kibana start

sudo service kibana stop

sudo service kibana restart

17.3.4 Verify that Kibana can be accessed from the browser

http://localhost:5601/app/kibana

NOTE: localhost needs to be replaced by the actual hostname or server
ip. If no UI is available, go to NGINX reverse proxy section to access
Kibana.

17.3.5 Verify Kibana status from the browser

http://localhost:5601/status

NOTE: If no UI is installed, continue to the NGINX reverse proxy
installation section to access Kibana from the browser.

17.4 Logstash Installation

NOTE: Logstash is currently not installed, as the Beats log shippers
(Filebeat and Metricbeat) are sending logs directly to Elasticsearch.
Logstash can be used to perform processing of logs. For more
information, see the Additional Considerations section.

17.4.1 Install Logstash

# yum -y install logstash

17.4.2 Start/Stop/Restart Logstash

sudo service logstash start

sudo service logstash stop

sudo service logstash restart

Example Logstash configuration to read system logs
(/var/log/*.log):

input {
  file {
    type => "syslog"
    path => [ "/var/log/messages", "/var/log/*.log" ]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    # Use the internal IP of your Elasticsearch server for production
    hosts => ["localhost"]
  }
}

17.4.3 BEST PRACTICES

  • Separate large Logstash configuration files into several smaller
    ones. The conf file path can be set to a directory. Files in the
    directory are merged by name, so name Logstash configuration files
    in alphabetical order (see the example layout below).

  • Configure Filebeat to feed Logstash and Logstash to feed Elasticsearch.
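
For example, a hypothetical /etc/logstash/conf.d layout following this naming convention (the file names are illustrative only):

/etc/logstash/conf.d/10-beats-input.conf
/etc/logstash/conf.d/20-l1p-filter.conf
/etc/logstash/conf.d/90-elasticsearch-output.conf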

17.5 Filebeat Installation

17.5.1 Install Filebeat

# yum -y install filebeat

17.5.2 Configure Filebeat

vi /etc/filebeat/filebeat.yml

Configure Filebeat to ship files to Elasticsearch:

Under the Elasticsearch output section, modify:

hosts: ["http://172.31.45.32:9200"]

NOTE: Use the eth0 ip where Elasticsearch is running.

Configure the paths for Filebeat to crawl and fetch logs from:

Under the Filebeat prospectors section, identify paths and, for example, add:

  • /var/log/mule_logs/mule_dfsp1/*.log

17.6 Start/Stop/Restart Filebeat

sudo service filebeat start

sudo service filebeat stop

sudo service filebeat restart

17.6.1 Import dashboards and index

/usr/share/filebeat/scripts/import_dashboards

17.6.2 Logstash vs Beats

Beats are lightweight data shippers that you install as agents on your
servers to send specific types of operational data to Elasticsearch.
Beats have a small footprint and use fewer system resources than
Logstash.

Logstash has a larger footprint, but provides a broad array of input,
filter, and output plugins for collecting, enriching, and transforming
data from a variety of sources.


Logstash filters that can be leveraged at L1P include anonymize and
json.

17.7 Metricbeat Installation

17.7.1 Install Metricbeat

# yum -y install metricbeat

17.7.2 Configure Metricbeat

vi /etc/metricbeat/metricbeat.yml

Configure Metricbeat to ship metrics to Elasticsearch:

Under the Elasticsearch output section, modify:

hosts: ["http://172.31.45.32:9200"]

NOTE: Use the eth0 ip where Elasticsearch is running.

17.7.3 Start/Stop/Restart Metricbeat

sudo service metricbeat start

sudo service metricbeat stop

sudo service metricbeat restart

17.7.4 Import dashboards and index

/usr/share/metricbeat/scripts/import_dashboards

18 NGINX Reverse Proxy Installation

18.0.1 Install NGINX

yum -y install nginx httpd-tools

18.0.2 Create password file for basic authentication of http users

htpasswd -c /etc/nginx/conf.d/kibana.htpasswd admin

18.0.3 Configure NGINX

vi /etc/nginx/conf.d/kibana.conf

server {
  listen 80;
  server_name localhost;

  auth_basic "Restricted Access";
  auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;

  location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

18.0.4 Restart NGINX

sudo service nginx restart

18.0.5 Access Kibana via NGINX on your browser

http://EC2_INSTANCE_URL

Enter the username (admin) and the password created with htpasswd.

18.1 Modify AWS EC2 Instance Security Group to open ports

18.1.1 Create Two Inbound rules

  1. tcp for port 80 for NGINX

  2. tcp for port 9200 for Elasticsearch; this can be opened only for
    the Beats/Logstash servers

18.2 Kibana Query

https://www.elastic.co/guide/en/kibana/current/search.html

https://www.mjt.me.uk/posts/kibana-101/

https://www.timroes.de/2016/05/29/elasticsearch-kibana-queries-in-depth-tutorial/

http://logz.io/blog/kibana-tutorial/

To perform Kibana queries, log into Kibana and make sure to set a
proper Time Range at Kibana->Discover (top right-hand corner). Then
simply enter a query and search; a few example queries follow below.
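
For example (the values are made up; the full querying rules are listed in the Using Kibana section below):

error AND dfsp1
l1p_trace_id:"d349c18b-e4ea-4913-b2f8-0b9dcd2f2293"
exists:exception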

18.3 Custom L1P_Index Configuration

The custom L1P_Index is defined by the two files shown below (ilp_template.json and types.json), which are part of the interop-elk GitHub repo. The L1P_Index is an elasticsearch index used as the storage data structure for the L1P-specific log data. Its main purpose is to capture and display L1P transaction timestamps across L1P components.

ilp_template.json:

{
  "template": [
    "l1p_index*"
  ],
  "mappings": {
    "l1p_log": {
      "_all": {
        "norms": false
      },
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "l1p_trace_id": {
          "type": "keyword"
        },
        "beat": {
          "properties": {
            "hostname": { "type": "keyword", "ignore_above": 1024 },
            "name": { "type": "keyword", "ignore_above": 1024 },
            "version": { "type": "keyword", "ignore_above": 1024 },
            "processing_timestamp": { "type": "date" }
          }
        },
        "input_type": {
          "type": "keyword",
          "ignore_above": 1024
        },
        "message": {
          "type": "text",
          "norms": false
        },
        "meta": {
          "properties": {
            "cloud": {
              "properties": {
                "availability_zone": { "type": "keyword", "ignore_above": 1024 },
                "instance_id": { "type": "keyword", "ignore_above": 1024 },
                "machine_type": { "type": "keyword", "ignore_above": 1024 },
                "project_id": { "type": "keyword", "ignore_above": 1024 },
                "provider": { "type": "keyword", "ignore_above": 1024 },
                "region": { "type": "keyword", "ignore_above": 1024 }
              }
            }
          }
        },
        "offset": {
          "type": "long"
        },
        "source": {
          "type": "keyword",
          "ignore_above": 1024
        },
        "tags": {
          "type": "keyword",
          "ignore_above": 1024
        }
      }
    }
  }
}
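
Assuming a local 5.x node, the template can be loaded with Elasticsearch's standard index-template API; the template name l1p_index below is illustrative:

curl -X PUT 'http://localhost:9200/_template/l1p_index' \
  -H 'Content-Type: application/json' \
  -d @ilp_template.json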

types.json

{
  "l1p_index": {
    "mappings": {
      "l1p_log": {
        "_all": {
          "norms": false
        },
        "dynamic_templates": [
          {
            "strings_as_keyword": {
              "match_mapping_type": "string",
              "mapping": {
                "ignore_above": 1024,
                "type": "keyword"
              }
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date"
          },
          "ilp_trace_id": {
            "type": "keyword"
          },
          "beat": {
            "properties": {
              "hostname": { "type": "keyword", "ignore_above": 1024 },
              "name": { "type": "keyword", "ignore_above": 1024 },
              "version": { "type": "keyword", "ignore_above": 1024 },
              "processing_timestamp": { "type": "date" }
            }
          },
          "input_type": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "message": {
            "type": "text",
            "norms": false
          },
          "meta": {
            "properties": {
              "cloud": {
                "properties": {
                  "availability_zone": { "type": "keyword", "ignore_above": 1024 },
                  "instance_id": { "type": "keyword", "ignore_above": 1024 },
                  "machine_type": { "type": "keyword", "ignore_above": 1024 },
                  "project_id": { "type": "keyword", "ignore_above": 1024 },
                  "provider": { "type": "keyword", "ignore_above": 1024 },
                  "region": { "type": "keyword", "ignore_above": 1024 }
                }
              }
            }
          },
          "offset": {
            "type": "long"
          },
          "source": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "tags": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "type": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      }
    }
  }
}

18.4 L1P_Index population

The L1P_Index is populated by the Logstash component of the ELK Stack. The following files (log-pipeline.txt, filebeat.yml), found in GitHub’s interop-elk repo, contain a Logstash pipeline used to populate the custom L1P_Index elasticsearch index.

log-pipeline.txt

input {
  beats {
    port => "5043"
  }
}

filter {
  # if the beat is from modusbox
  mutate {
    rename => { "@timestamp" => "[beat][processing_timestamp]" }
  }

  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}%{SPACE}%{LOGLEVEL}%{SPACE}%{SYSLOG5424PRINTASCII}%{SPACE}%{PROG:log_source}%{SPACE}(.*L1P_TRACE_ID=)?(%{UUID:l1p_trace_id})?(.*(L1P_METRIC_TIMER:(?<timername>%{JAVACLASS})=(?<timervalue>%{NUMBER})|L1P_METRIC_COUNTER:(?<countername>%{JAVACLASS})|L1P_METRIC_GAUGE:(?<gaugename>%{JAVACLASS})=(?<gaugevalue>%{NUMBER})))?.*" }
  }

  date {
    match => ["log_timestamp", "ISO8601"]
    remove_field => ["log_timestamp", "log_source"]
  }

  # if the beat is from ripple
}

output {
  stdout { codec => rubydebug }

  if "metric" not in [tags] {
    elasticsearch {
      host => "fix_me"
      cluster => "change_me"
      protocol => "http"
      index => "l1p_index_%{+YYYY.MM.dd}"
      document_type => "l1p_log"
    }
  }
}

filebeat.yml

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /Users/honainkhan/dev/mbox/bmgf/interop-elk/filebeat/log-samples/modusbox/interop-spsp-clientproxy.log
    - /home/ec2-user/elkwork/logs/interop-spsp-backend-services.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs*

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines
  # that are matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that are matching any regular expression from the list.
  # By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java Stack Traces or C-Line Continuation.

  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with [
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be
  # appended to a pattern that was (not) matched before or after, or as long as a
  # pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash.
  multiline.match: after

18.5 Transaction Details Kibana Custom Visualization

In order to access the Transaction Details Kibana custom visualization, navigate to the Visualize menu and look for “Transaction Details”.


The Transaction Details Visualization shows a specific transaction identified by its L1P_Trace_Id across L1P components and it displays the transaction start and transaction end timestamps.


The Transaction Details visualization is backed by the L1P_Index data structure.

To create the visualization, navigate to the Visualize menu in Kibana, select the type of visualization, select the index (L1P_Index), select and organize the data depending on the visualization type selected, and save the visualization.

18.6 Proposed PROD Architecture

Elasticsearch

Security

18.7 Additional Considerations

Utilize Filebeat given its lightweight nature compared to Logstash.
It's part of the ELK stack. Use Filebeat to ship and centralize logs.
Filebeat will feed Logstash. Logstash can still be used to transform
or enrich your logs and files.

Take full advantage of the Beats log shippers. Along with Filebeat, use
Metricbeat, Packetbeat and Heartbeat to monitor additional aspects of
the system.

Metricbeat by default ships system metrics to elasticsearch, but there
are other Metricbeat modules that can be configured to monitor
databases, http servers, queues, docker, plus any custom built
modules.

Utilize a Queue in the ELK architecture before Logstash to avoid
overutilization of Elasticsearch and to perform eventual Elasticsearch
upgrades without losing any data during downtime.

High Availability. Leverage a highly available queuing system from
which Logstash servers read, and an Elasticsearch cluster with three
master nodes.

Elasticsearch Scalability. Understand requirements and research
elasticsearch accordingly.

Data Curation. Use a Curator on a cron job to delete old indices to
avoid an elasticsearch crash. Also, optimize older indices to improve
elasticsearch performance.

Conflict Mapping. Mapping is like a database schema in Elasticsearch.
Research if this is a concern.

Security with Multi-User & Role-Based access. Understand requirements
and research options.

Log Shipping. Leverage Logstash pull module to periodically go to Mule
and other servers and pull data.

Log Parsing. Document grok expression used by Logstash to parse all
different log types involved in L1P.

Alerting framework. Identify requirement. One can be built using cron
jobs that query and generate emails based on search results.

Log archiving. Identify requirement in terms of how long to retain
logs for. Also, identify storage option, e.g. S3.

ELK Monitoring. Nagios can be used to monitor the ELK stack. Nagios
has some plugins to monitor Elasticsearch. Also, need to monitor queue
size of the queuing system and health of the Logstash and Kibana
applications.

Logstash plugins. Beats (Filebeat), Grok, Logstash Codecs (json to
plain text and vice versa), Kafka.

Keep log data protected from unauthorized access. Open Source ELK does
not provide role-based access.

Maintenance requirements. Data retention policies, upgrade, etc.

Logstash and Elasticsearch should run on different machines as they
both use the JVM and consume large amounts of memory. Cluster
Elasticsearch, use at least 3 master nodes and at least 2 data nodes.
“We recommend clustering Elasticsearch with at least three master
nodes because of the common occurrence of split brain, which is
essentially a dispute between two nodes regarding which one is
actually the master. As a result, using three master nodes prevents
split brain from happening. As far as the data nodes go, we recommend
having at least two data nodes so that your data is replicated at
least once. This results in a minimum of five nodes: the three master
nodes can be small machines, and the two data nodes need to be scaled
on solid machines with very fast storage and a large capacity for
memory.”

18.8 Using Kibana

How to use Kibana, its dashboards and query language for L1P tracing and debugging purposes.

Kibana is the ELK Stack (Elastic Stack) window into the Elasticsearch data. It allows you to monitor, query, visualize and create reports on Elasticsearch data. This document is an L1P Kibana User Guide that shows how to use Kibana for the most common L1P use cases. Kibana features used in L1P projects are described first, and then specific use cases are described in detail.

18.8.1 Kibana Dashboards and Visualizations

Kibana allows you to create visualizations based on the Elasticsearch data. Furthermore, Kibana allows you to create dashboards based on one or more visualizations.

Elasticsearch data is generated by the several components:

  • Logstash
  • Filebeat
  • Metricbeat
  • Heartbeat

For the data generated by the Beats family of shippers, Kibana also contains out-of-the-box Dashboards and Visualizations. These come prepackaged as part of the given Beats family shipper. After installing the given Beats family shipper, its dashboards and visualizations can be imported.

18.8.1.1 Dashboards Import

Following are the scripts to install the Beats family shipper dashboards:

  • /usr/share/filebeat/scripts/import_dashboards
  • /usr/share/metricbeat/scripts/import_dashboards
  • /usr/share/heartbeat/scripts/import_dashboards

In order to access the visualizations and dashboards that are imported by these scripts, go to Kibana and navigate to the Visualizations or Dashboards menu options in the left-hand menu.

18.8.1.2 Access to Kibana via NGINX on your browser

http://EC2_INSTANCE_URL

18.8.2 L1P Kibana Use Cases

18.8.2.1 How to monitor logs?


Follow these steps:

  1. Navigate to “Discover” menu
  2. Expand Time Range, by clicking the “Time picker” icon on top right corner
  3. Set Time Range, pick between Quick, Relative and Absolute modes and set time range
  4. Enter your search criteria (e.g. L1p-Trace-Id)
  5. Perform search
  6. Review logs returned

18.8.2.2 How to set time range?


By default, Kibana will show you logs for the last 15 min. To set the time range from the Discover Kibana page, click the “Time picker” icon in the top right corner. This expands the “Time Range” control panel, which contains three different modes to set the time range: “Quick”, “Relative” and “Absolute”.

18.8.2.3 How to enable auto refresh of search results?


Search results can be set to auto-refresh, so your search results and visualizations do not contain stale data. Optionally, you can manually refresh results by clicking “Refresh”. Auto-refresh can be enabled by clicking the “Time picker” icon, clicking the “Auto-refresh” link, and then setting it to on and specifying the refresh rate.

18.8.2.4 How to change which indices you are searching?


When you submit a search request, the indices that match the currently-selected index pattern are searched. The current index pattern is shown below the toolbar. To change which indices you are searching, click the index pattern and select a different index pattern. NOTE: By default, only one index pattern is shown; you must click the arrow to expand the index section to see the different indices available.

18.8.2.5 How to add/remove fields from Kibana’s Discover window to monitor logs?


Navigate to the Kibana Discover page and hover over the field you would like to add/remove from the search results table and click the add/remove button.

18.8.2.6 How to trace a L1P transaction based on its L1p-Trace-Id?

Navigate to the Kibana Discover page and enter a search criterion like:

L1p-Trace-Id=d349c18b-e4ea-4913-b2f8-0b9dcd2f2293

18.8.2.7 Examples of common L1P Kibana Queries

(Screenshot: examples of common L1P Kibana queries.)

18.8.2.8 Querying behaviour and rules in Kibana:

  • query behaves as unstructured text search, with some special commands; if you get the command syntax wrong it just does an unstructured text search
  • by default it searches for entries containing any of your search terms
  • hyphen is considered a delimiter
  • in order to search for a string literal use double quotes NOT single quotes (single quotes are ignored)
  • to search for a single field enter the field name, then a colon and then the value within double quotes
  • exists and missing are examples of commands that can be used, e.g. exists:exception
  • AND and OR are case-sensitive, must use upper case
  • can use parentheses when searching for several things, e.g. exists:exception AND ( l1p_trace_id:“1614cfa4-e792-4b01-9537-2f3eb8001b5e” OR _payment_id:“1614cfa4-e792-4b01-9537-2f3eb8001b5e” )
  • field names are also case-sensitive, examples of fields are l1p_trace_id and index

18.8.2.9 How to save a search, how to open a saved search and how to manage saved searches?


Kibana allows you to save search criteria. From the Kibana Discover page, just hit the “Save” link on the top right-hand corner, just before the “Time Picker” icon, to save your search. To access your saved search, hit the “Open” link. To delete or edit your saved search, hit the “Open” link and then the “Manage Saved Searches” link.

18.8.2.10 How to monitor transaction duration between l1p_components?


A Kibana Visualization on top of the custom l1p_index was created for this purpose and can be accessed by navigating to the Kibana Visualize page and selecting the “Transaction Details” link. NOTE: Once the trace id is contained in all component and service logs, this table will correctly display the durations.

18.8.3 Reference to Kibana’s Official Documentation

https://www.elastic.co/guide/en/kibana/current/introduction.html

19 Exporting the Documentation

In Mojaloop, the sources for the documentation are markdown files stored in GitHub. We use a tool called Dactyl to convert files from markdown (md) format to PDF format. The PDF format is the exported format we use to share offline documentation.
Overview and cross-repo documentation is in the Docs repository. Other repositories have detailed information about their contents.

19.1 Setup

See Dactyl setup to set up the tool. Dactyl has dependencies on Python and on a command line tool, Prince, to do part of that conversion.

git pull # the latest for all relevant repositories

All the repositories should be in the same root directory.

Because building the documentation requires md files from multiple repos, the latest files from all the repos mentioned in the dactyl-config file need to be obtained; a loop like the one shown after the list below can automate this. The list of required repos may change, but currently includes all of the following:

mojaloop
central-directory
central-ledger
Docs
ilp-service
forensic-logging-sidecar
interop-ilp-ledger
interop-dfsp-directory
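
A minimal sketch of that pull step, assuming all required repos are already cloned side by side under the root directory:

# Pull the latest changes for every repo needed by the build.
for repo in mojaloop central-directory central-ledger Docs ilp-service \
    forensic-logging-sidecar interop-ilp-ledger interop-dfsp-directory; do
  git -C "$repo" pull
done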

19.2 Build Process

19.2.1 Copy the image files

Images that are going to be included in the documentation need to be copied to an images directory. Run the script CopyImages.sh from the root directory (the one above Docs):

Docs/ExportDocs/CopyImages.sh

19.2.2 Run Dactyl

From the root directory run the dactyl build command for the document. For example, to generate a full set of documents run:

dactyl_build -t all -c Docs/ExportDocs/dactyl-config.yml --pdf

To generate just the stakeholder overview run:

dactyl_build -t stakeholder -c Docs/ExportDocs/dactyl-config.yml --pdf

19.3 Build Info

Dactyl first converts all the md files to HTML. In that process it can apply common css styles and cover pages, which are in the pdf_templates directory. Prince is then used by Dactyl to convert the HTML files to PDF. The --leave_temp_files parameter can be very useful for debugging if you want to see the intermediate HTML.

The files are written to the “out” directory under the root.

20 Interledger for Mojaloop

The Interledger project is a suite of protocol definitions and reference implementations that define a standard way to connect any number of disparate payment systems together into one interconnected network: an internet of value. Mojaloop uses Interledger as its settlement layer so that individual instances or deployments of Mojaloop software can eventually become interconnected not just with one another, but with all other payment systems worldwide. Interledger development is spearheaded by Ripple, with support from the W3C and various other stakeholders.

Interconnectivity Animation

Interledger provides a standard for linking disparate payment networks to one another.

Contents:

20.1 Why Interledger

20.1.1 The Right Technology

Interledger’s features and capabilities closely align with Mojaloop’s principles, and Interledger is the most advanced standard for interconnectivity at this time.

20.1.2 For the Right Context

The Bill & Melinda Gates Foundation has observed the trends, problems, and opportunities in developing countries in the world and developed principles to guide the direction of a financial system that can benefit the most people:

  • A push payment model with immediate funds transfer and same-day settlement. Interledger is compatible with a push payment model and can enable settlement within seconds, depending on the limitations of the transacting parties.
  • Open-loop interoperability between providers. Interledger is intended to be an open standard that anyone can build on, enabling innovation and interoperability without the usual boundaries
  • Adherence to well-defined and adopted international standards. Interledger is being developed by a trans-national community with open standards, in collaboration with the W3C and other standards organizations.
  • Meeting or exceeding the convenience, cost, and utility of cash. Interledger’s open standards, incredibly fast digital settlement, and open connectivity add up to a system that can be far cheaper, faster, and more convenient than cash for a wider range of transaction values.

The two remaining principles for Mojaloop are system-wide shared fraud and security protection and identity and know-your-customer (KYC) requirements. Interledger does not have specific provisions for either one of these, but it does not preclude participants from building systems that enforce such restrictions. In fact, Interledger has been designed on the assumption that providers of different types and in different contexts must have different restrictions and needs for fraud, security, and identity requirements. Mojaloop is spearheading one of the first sets of fraud detection and information sharing to be built into an Interledger compatible system.

20.2 Core Concepts

20.2.1 Ledgers

Interledger conceptualizes a ledger as a system tracking accounts and balances in a single currency. In the real world, there are systems called “ledgers” that support multiple currencies; in Interledger parlance, each supported currency in such a system would comprise a separate “ledger”. The act of sending money from one user of a given ledger to another user of the same ledger is called a transfer. A payment that can be executed by a single transfer within a single ledger does not need or use Interledger.

Transfer from Sender to Receiver on the same Ledger

20.2.2 Connectors

The Interledger project assumes that no one ledger will ever serve the whole world. Aside from the problem of scaling a ledger to serve billions of members of humanity, ledgers have different intrinsic qualities that benefit different parties; different ledgers exist today in part because their customers have not just different but mutually exclusive needs and preferences. Still, people would like to be able to pay each other even if they don’t use the same ledger:

Sender -> Ledger -> ? -> Ledger -> Receiver

Payments that cross a ledger boundary are currently hard.

Rather than trying to create one ledger to rule them all, we should make payment systems interoperable. We do this by connecting systems to each other, then bridging payments through multiple connectors using Cryptographic Proof.

Sender -> DFSP Ledger -> Connector -> IST Ledger -> Connector -> DFSP Ledger -> Receiver

Connectors link ledgers to each other. In the L1P model, all DFSPs connect to a central ledger.

The Connector is one of the core pieces of ILP software. Each connector is linked to two or more ledgers where it holds a balance, and it facilitates payments by receiving money in one ledger and paying out money in another ledger. Within a single Level One deployment, we expect that each Digital Financial Services Provider (DFSP) runs a connector pairing their home ledger to the central IST ledger, and all the ledgers are denominated in the same currency. In the greater inter-ledger world, a Connector could link two DFSPs directly, and the ledgers could be denominated in any pair of currencies; the connector sets the rate of exchange between each ledger’s native currency.

20.2.3 Cryptographic Proof

In traditional payments, each intermediary must be trustworthy. The more intermediaries, the higher the risk of a transaction failing partway through. Interledger solves this problem with the financial equivalent of a two-phase commit. Each transfer in the payment is locked by a condition value and unlocked by a fulfillment value that hashes to the condition. With Interledger, each party only needs to trust the ledger or intermediary immediately adjacent in the chain, regardless of how many transfers and intermediaries are involved.
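
As a minimal illustration of this hashlock idea (not Mojaloop's actual tooling; the crypto-conditions format used in practice adds more structure around this), the fulfillment is a random secret preimage and the condition is its SHA-256 digest:

# Generate a random 32-byte fulfillment (preimage), base64-encoded.
fulfillment=$(head -c 32 /dev/urandom | base64)
# The condition is the SHA-256 hash of the raw preimage.
condition=$(printf '%s' "$fulfillment" | base64 -d | openssl dgst -sha256 -binary | base64)
echo "condition:   $condition"
echo "fulfillment: $fulfillment"

A transfer prepared with the condition can be executed only by presenting the fulfillment, since any ledger can cheaply verify that hashing the presented fulfillment reproduces the stored condition.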

To provide the least risk for users, ledgers should provide conditional hold functionality, such that a cryptographic fulfillment automatically executes a prepared transfer. Mojaloop’s example ledgers all implement such systems. Interledger has some other requirements for optimal operation, including authenticated best-effort messaging between users of a ledger. For details of ledger requirements and recommendations, see IL-RFC-17: Ledger Requirements.

20.2.4 Forward Holds, Backwards Execution

The typical execution pattern of an Interledger transfer is forward holds (starting with the transfer from originator) followed by backwards execution (starting with the transfer to the beneficiary). First comes a payment planning step wherein the originator asks the beneficiary for a unique cryptographic condition to which the beneficiary knows the answer. (In Mojaloop, the DFSPs provide this functionality on behalf of their customers.) The originator starts by preparing a conditional transfer to an intermediary’s account in the sending DFSP’s ledger. This intermediary is a Connector. In the Mojaloop model, each DFSP runs a Connector service that links the DFSP’s ledger to a central Interoperability Service for Transfer (IST) ledger. The Connector chooses a route and prepares a conditional transfer in a different ledger, to either another intermediary Connector or the final beneficiary. That Connector does the same, until the transfer to the beneficiary is prepared in this manner. All the transfers share the same cryptographic condition, which only the beneficiary can fulfill.

The beneficiary (or, in Mojaloop’s case, the beneficiary’s DFSP) notices that the incoming transfer with a known cryptographic condition has been prepared in the beneficiary’s preferred DFSP ledger. The beneficiary executes this transfer by revealing the cryptographic fulfillment to the DFSP ledger; at this point the beneficiary has gotten paid. The last connector sees the fulfillment in the execution notification, then uses the same fulfillment to execute the previous transfer in the chain. This continues until the first transfer executes, debiting the originator.

Each transfer needs a timeout, or else malicious actors could trick connectors into locking up money forever by proposing transfers that won’t be executed. However, with timeouts, it’s possible that some transfers of a payment execute in time while other transfers expire, especially if a particular connector or ledger has an outage. The pattern of executing transfers last-to-first protects the customer: it guarantees that the money will be credited to the beneficiary or the originator will not be debited. (In the failure case, both are true.) Connectors in the middle take the risk of losing funds in the failure case, but they can choose the timeouts of each transfer and the fees for exchanging across ledgers in order to minimize the risk. For more information, see Connector Risks.
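As a sketch of how a connector might choose those timeouts, the key invariant is that the incoming transfer must expire after the outgoing one, leaving time to relay the fulfillment; the margin value below is an illustrative assumption, not a Mojaloop constant:

    // A connector accepts a prepared incoming transfer only if its expiry
    // leaves enough margin to execute it after the outgoing transfer fulfills.
    const FULFILLMENT_MARGIN_MS = 10_000; // assumed safety window

    function expiriesAreSafe(incomingExpiry: Date, outgoingExpiry: Date): boolean {
      // The outgoing (next-hop) transfer must expire first, so the connector
      // can still execute the incoming transfer once it learns the fulfillment.
      return incomingExpiry.getTime() - outgoingExpiry.getTime() >= FULFILLMENT_MARGIN_MS;
    }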

20.3 Protocol Layers

The design of Interledger intentionally copies the design of the Internet as much as is applicable. The four Interledger layers (Ledger, Interledger, Transport, and Application) are analogous to the Data link, Network, Transport, and Application layers of the OSI model. Both models revolve around a single core protocol: Internet Protocol (IP) for the OSI stack, and the Interledger Protocol (ILP) for the Interledger stack.

Internet stack and Interledger stack architecture diagrams

Another way of looking at the protocol:

Interledger protocol suite W diagram

The layers of the Interledger protocol stack are as follows:

Layer Description
Application Application-defined protocols for planning payment.
Transport Defines standard data formats for application-layer data and cryptographic condition generation.
Interledger In this layer, connectors communicate with one another to plan the transfers involved in a payment.
Ledger In this layer, senders, receivers, and connectors communicate to the ledgers involved in the payment.

20.3.1 Application Layer

The Application Layer coordinates and prepares overall payments. User-facing applications implement protocols from this layer to prepare payments with one another. At this layer, the two endpoints of a payment communicate directly with one another. In Mojaloop, the Scheme Adapter implements a custom application layer protocol using the Interledger Payment Request (IPR) format as the Transport layer. In L1P’s custom protocol, the two DFSPs communicate directly using HTTPS to plan a payment before preparing it.

20.3.2 Transport Layer

The Transport Layer defines how payments are identified and how to generate the cryptographic conditions for the transfers in the payment. Mojaloop uses the IPR format. For the data included in this layer, Mojaloop uses the format defined by the Interledger Pre-Shared Key (PSK) specification, which resembles HTTP headers, although L1P does not use the PSK protocol itself. L1P does not encrypt the data.

Key pieces of data defined at this layer are:

  • The expiration time of the payment
  • The key type used to generate the unique condition and fulfillment for this payment
  • A unique nonce for the payment
  • The Mojaloop “Trace ID” of the payment

Note: Unlike the OSI model, the Interledger stack does not have a hard distinction between the “Application” and “Transport” layers; any application layer protocol is closely tied to a particular transport layer protocol. The main point of the distinction is to make it possible to implement client libraries for transport layer functionality that can be used as generic building blocks for writing application-layer protocols.
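As an illustration of the transport-layer data listed above, the following TypeScript sketch assembles a PSK-style, HTTP-header-like block; the header names and values are hypothetical, not the exact names from the PSK specification:

    // Hypothetical PSK-style header block carrying the fields listed above.
    import { randomBytes, randomUUID } from "crypto";

    const headerBlock = [
      `Expires-At: ${new Date(Date.now() + 60_000).toISOString()}`, // payment expiration
      `Key-Id: psk-condition-key`,                                  // key type for condition generation
      `Nonce: ${randomBytes(16).toString("base64")}`,               // unique nonce for the payment
      `L1p-Trace-Id: ${randomUUID()}`,                              // Mojaloop trace ID
    ].join("\r\n");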

20.3.2.1 More information

20.3.3 Interledger Layer

There are two closely-related protocols in the Interledger layer: the Interledger Protocol (ILP) and the Interledger Quoting Protocol (ILQP). Connectors communicate with each other in these protocols, using ILQP to quote payments and ILP to prepare payments. (The execution happens individually for each transfer at the ledger layer.)

20.3.3.1 More information

20.3.4 Ledger Layer

The ledger layer is implemented by the unique, core ledgers of each system. In Mojaloop, these ledgers include each DFSP’s internal ledger and the IST’s central ledger. The core operations are preparing conditional transfers, executing those transfers, and sending authenticated messages to other users of the same ledger.

Each Connector must know how to use the API of the ledgers to which it is connected. Rather than having a unique API for each ledger, Mojaloop’s reference implementations all use a consistent API, called the Five Bells Ledger API. In the case of a DFSP that has an existing ledger API, either the DFSP must run an adapter to provide a Five Bells Ledger API, or the Connector must have a plugin for using the DFSP’s own ledger API.

20.4 Addresses and Routing

Within the Interledger Protocol (ILP) layer, connectors route payments according to their internal routing tables. The destination of a given ILP payment is determined by its ILP Address, a hierarchical string of alphanumeric identifiers analogous to an IP address. For example, all payments to addresses starting with private.l1p.ZZZ.dfsp1. are routed to the connector operated by DFSP 1.

Currently, Mojaloop participants are not meant to be reachable by the general public, so they use the private. prefix.

The details of routing are not specified in the protocol, but the connectors used by Mojaloop follow simple longest-prefix rules with static routing tables.
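A minimal sketch of that longest-prefix lookup, with an illustrative static routing table (the entries and connector names are assumptions):

    // Static routing table mapping address prefixes to next hops.
    const routes: Record<string, string> = {
      "private.l1p.ZZZ.dfsp1.": "dfsp1-connector",
      "private.l1p.ZZZ.dfsp2.": "dfsp2-connector",
      "private.l1p.ZZZ.": "ist-connector", // instance-wide fallback
    };

    function nextHop(destination: string): string | undefined {
      const match = Object.keys(routes)
        .filter((prefix) => destination.startsWith(prefix))
        .sort((a, b) => b.length - a.length)[0]; // longest prefix wins
      return match === undefined ? undefined : routes[match];
    }

    // nextHop("private.l1p.ZZZ.dfsp1.849702568.medical") -> "dfsp1-connector"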

ILP Addresses are specified by IL-RFC-15: ILP Addresses.

20.4.1 Address Allocation

Each DFSP needs a unique address prefix. The following guidelines establish how such a prefix is constructed:

  1. A standard prefix, private.l1p.
  2. A country or location prefix. In most cases, this can be an ISO 3166-1 alpha-3 code. The code ZZZ represents demo instances. If there is any doubt, the operator of the IST chooses which code to use.
  3. A unique identifier for the DFSP. The DFSP should suggest a code, and the operator of the IST should confirm that the code is not already in use by another DFSP in the same instance (that is, with the same country/location prefix). Valid characters for the DFSP’s segment identifier are alphabetic (A-Z, upper or lower case, case sensitive), digits 0-9, underscore (_), tilde (~), and dash (-). The DFSP identifier should be kept short since the entire address has to fit in 1023 characters, including any sub-account addressing or invoicing information; a good DFSP identifier should be about 20 characters or less.

20.4.1.1 Example address prefix for “DFSP 1” in a test instance

private.l1p.ZZZ.dfsp1.

The address of a customer account should be the DFSP’s prefix and the customer’s account number. (It’s acceptable to have additional dot-separated segments after the account number to indicate more information about the purpose or destination of a payment.)

20.4.1.2 Example customer account address for a “medical benefits” sub-account

private.l1p.ZZZ.dfsp1.849702568.medical

20.5 Data Formats

Interledger standards are defined using Abstract Syntax Notation One (ASN.1) as defined in ITU X.680 and encoded with Octet Encoding Rules (OER) as defined in ITU X.696. By relying on ASN.1 we can take advantage of highly sophisticated tooling which allows us to verify the integrity of our specifications and the correctness of our implementations. By encoding with OER we ensure that parsers are very simple to write, wire formats are compact and encoding/decoding performance is excellent.
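As a small taste of what OER’s simplicity means for parsers, here is a sketch of the OER length determinant that prefixes variable-length fields (a sketch of the encoding rule as we understand it, not a full codec):

    // OER length determinant: lengths up to 127 fit in one byte; longer values
    // use 0x80 | n, followed by an n-byte big-endian length.
    function oerLengthPrefix(length: number): Buffer {
      if (length < 128) return Buffer.from([length]);
      const bytes: number[] = [];
      for (let v = length; v > 0; v = Math.floor(v / 256)) bytes.unshift(v % 256);
      return Buffer.from([0x80 | bytes.length, ...bytes]);
    }

    // oerLengthPrefix(5)   -> <05>
    // oerLengthPrefix(300) -> <82 01 2c>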

20.6 More information

20.6.1 PSK Data Format

The PSK Data Format is a data structure closely resembling HTTP headers, defined for use with the PSK Interledger protocol. The same data format is used in the reference implementation of the IPR transport layer and in Mojaloop, because it provides some convenient properties. For example, it provides a public headers section whose contents can be read by all connectors in the middle of a payment.

20.6.2 ILP Packet

The ILP Packet is a binary data structure that should be attached to transfers (as a memo, if possible) to connect them to an Interledger payment. The packet contains:

  • The Interledger address of the account where the payment should ultimately be delivered
  • A 64-bit unsigned integer amount, with the scale and currency defined by the ledger where the amount is to be delivered
  • An arbitrary, opaque, variable-length binary data field

20.6.2.1 More information

20.6.3 ILP Error Format

The ILP Error Format is a binary data structure that Interledger components use to indicate a problem with executing a payment in the Interledger layer. The error format includes an error code, which is inspired by HTTP status codes, where the prefix specifies a broad category of causes and the number specifies the exact error that occurred. To distinguish Interledger error codes from HTTP status codes, Interledger errors use a letter prefix instead of a number. For example, temporary Interledger errors use the prefix “T”; this is similar to HTTP status codes in the 500 range.
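A minimal sketch of acting on that prefix, based only on the “T” (temporary) category described above; treating every other prefix as non-retryable is an assumption of this sketch:

    // "T" errors are temporary (comparable to HTTP 5xx), so retrying may succeed.
    function isRetryable(errorCode: string): boolean {
      return errorCode.startsWith("T");
    }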

20.6.3.1 More information

20.6.4 Amounts

In the Interledger layer, amounts are always represented as 64-bit unsigned integers. This provides extremely predictable precision and rounding behavior. Interledger amounts cannot be negative because you cannot transfer a negative value. (That would be the equivalent of a pull payment in a push payment system.) The amount is always defined in the context of a particular ledger, specifically, the one where the receiver’s address is located. Each ledger’s interface must define a translation from its internal data format to a 64-bit unsigned integer. In the Five Bells Ledger API, the Get Metadata method handles this by reporting the currency and scale of that currency.

Two ledgers may choose different scales for representing the same currency, depending on their intended use case. For example, a ledger optimized for micropayments might have a “nanodollar precision” with a minimum amount of 10^-9 USD, while a traditional bank might set the limit at “millidollar precision” such that the minimum amount is 10^-3 USD ($0.001). In the nanodollar precision ledger, 2 USD would be represented as 2000000000 while in the millidollar precision ledger 2 USD would be 2000. Interledger’s 64-bit unsigned integer can fit very large numbers without losing precision. For example, a payment in the amount of the gross national product of the USA in 2015 (18.14 trillion purchasing-parity dollars) could be represented down to the level of 10^-6 dollars ($0.000001) without rounding. In the unlikely event that a payment requires more precision than a 64-bit integer can provide, it could be divided into two Interledger payments to different ledger prefixes that represent different scales.

In JSON data, Interledger amounts should be represented as decimal strings. Many JSON parsers assume JSON numbers have the same precision as JavaScript numbers (64-bit double-precision floating point), which cannot represent all unsigned 64-bit integers without losing precision. By representing amounts as strings, senders and receivers of JSON can serialize and deserialize the amounts using data types that can represent the full precision, or at least as much precision as is necessary for their specific purposes.
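A minimal TypeScript sketch of both points (string-encoded amounts and ledger scales), reusing the nanodollar/millidollar example above; the variable names are illustrative:

    // Parse a JSON string amount into a BigInt to keep full 64-bit precision.
    const NANO_SCALE = 9n;  // ledger with a 10^-9 USD minimum amount
    const MILLI_SCALE = 3n; // ledger with a 10^-3 USD minimum amount

    const amountOnNanoLedger = BigInt("2000000000"); // 2 USD at nanodollar scale

    // Rescale to the millidollar ledger; scaling down can lose precision, so a
    // real implementation would check the remainder before dividing.
    const amountOnMilliLedger = amountOnNanoLedger / 10n ** (NANO_SCALE - MILLI_SCALE);
    // -> 2000n, serialized back to JSON as the string "2000"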

20.7 Connector Risks

Interledger guarantees that the receiver gets paid or the sender gets their money back. There are, however, some failure cases where connectors in the middle could lose money because a transfer expired before the connector could execute it. (In those cases, the receiver gets paid and the sender gets their money back, thanks to the backwards execution.) To balance out these risks, connectors should plan accordingly.

IL-RFC-18: Connector Risk Mitigations discusses general patterns connectors can follow to minimize their risk of losing money.

In the case of a Mojaloop instance, some more specific modifications are possible to further mitigate risk. These are possible because the Mojaloop design involves a trusted central ledger, and each DFSP has control over its own ledger and connector. These optimizations are, in short:

  • Receiver Wait & Pay - The receiving DFSP tries to fulfill the transfer on the central ledger before preparing the transfer to the final receiver.
  • Sender Check-before-Rollback - The sending DFSP checks the outcome of the transfer on the central ledger shortly after that transfer expires. The sending DFSP sets the timeout of the transfer in its own ledger such that it can execute the transfer (including possible retries) after seeing the outcome on the central ledger.
    • Or the sender checks the result on the central ledger before expiring a transfer in the sending DFSP’s ledger. If the transfer succeeded on the central ledger, the sender executes the transfer on the sender’s DFSP ledger even if that means the transfer in the sending DFSP’s ledger executes at or slightly past its expiration time.

20.8 How to Troubleshoot ILP Payment Issues

  • Consult the logs.
  • See which transfers executed and which transfer or transfers failed.

20.9 Software Components

The components are shown in the following diagram and described in the table below.

Software block diagram

Component Name Summary
Scheme Adapter A custom application defined by Mojaloop that handles the planning of payments on an end-to-end basis.
ILP Service A RESTful service that acts as a client library for accessing ILP-related functions, such as getting quotes from the ILP Connector, generating conditions in IPR format, and similar tasks.
ILP Client A common library used within the ILP Service and the ILP Connector.
ILP Connector Holds money in the DFSP ledger and the Central Ledger, and sets the rates of exchange between them.
DFSP Ledger The reference ledger for a DFSP. Tracks customer balances and exposes the Five Bells Ledger API.
Central Ledger The reference ledger for a central IST. Tracks DFSP / connector balances and exposes the Five Bells Ledger API.

20.9.1 Scheme Adapter

A custom application defined by Mojaloop that handles the planning of payments on an end-to-end basis. Part of the DFSP software.

20.9.1.1 More information

20.9.2 ILP Service

A convenience application for Mojaloop that provides a handful of ILP-related functions through a RESTful(-ish) API. Functionality includes getting quotes, creating Interledger Payment Request objects (which include the cryptographic conditions), and issuing notifications when the connector detects that ILP-compatible transfers have been prepared.

20.9.2.1 More information

20.9.3 ILP Client

This is a standard library used by the ILP Service and the ILP Connector. It handles things like validating and verifying Crypto-Conditions. It interfaces with the ILP Connector using the Interledger Protocol and the Interledger Quoting Protocol. It also interfaces with the reference DFSP ledgers and central IST ledger using the Five Bells Ledger API, and can be extended with plugins for other ledger interfaces.

20.9.3.1 More information

20.9.4 ILP Connector

The ILP Connector connects one DFSP’s Ledger to another ledger. For now, a DFSP’s connector always connects the DFSP to the Central Ledger. In the future, it is possible that ILP Connectors could connect two DFSP ledgers directly, and there could even be a competitive marketplace of ILP Connectors between pairs of ledgers.

The ILP Connector has accounts holding money with the ILP Ledger Adapters of each of the two ledgers it connects. (It can also connect directly to ledgers that implement ILP natively.) The connector defines the exchange rates between balances on the two ledgers.

Mojaloop uses the Interledger project’s reference implementation for a connector.

20.9.4.1 More information

20.9.5 Running the ILP Software

Ansible is used for deploying the ilp-connector and the ilp-service. The Ansible Playbook can be run with the command:

ansible-playbook -v --extra-vars="docker_username=<FILL ME IN> docker_password=<FILL ME IN> docker_email=<FILL ME IN>" --inventory-file=hosts-test ansible.yml

This command should be run from the ansible directory in this repository.

Ansible uses the SSH keys found in your normal SSH directory to log in to the servers.

The Docker credentials are those used for the private registry (modusbox-level1-docker.jfrog.io).

The inventory file should be either hosts-test or hosts-qa, depending on whether you want to deploy the components to the L1P Test or QA environment.

20.9.6 Execution Flow

Execution flow is shown in the following diagram.

Execution Flow Diagram

20.10 Configuring High Availability Proxy

20.10.1 Introduction

Load balancing across multiple server instances is a well-established technique for optimizing resource utilization, maximizing throughput, and reducing latency, keeping servers highly available even under millions of concurrent requests from users or clients.

This document explains how to install and configure HAProxy on an AWS EC2 instance.

HAProxy is free, open source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications that spreads requests across multiple servers. It is written in C and has a reputation for being fast and efficient (in terms of processor and memory usage).

20.10.2 Install HAProxy

Install HAProxy on an AWS EC2 instance by issuing the command 'sudo yum install --enablerepo=epel haproxy'.

After the installation succeeds, check the installed version by issuing the command 'haproxy -v'.

     $ haproxy -v
     HA-Proxy version 1.7.4 2017/03/27

Create a config file for the Mule instances.

20.10.3 Sample Config file

    listen  stats
      bind *:9002
      mode            http
      log             global
      maxconn 10
      timeout client  100s
      timeout server  100s
      timeout connect 100s
      timeout queue   100s
      stats enable
      stats hide-version
      stats refresh 30s
      stats show-node
      stats auth admin:mypassword
      stats uri  /haproxy?stats
      stats admin if TRUE
    frontend localnodes
      bind *:9001
      mode http
      default_backend nodes
    backend nodes
      mode http
      balance roundrobin
      server DFSP1-Test ec2-52-32-100-1.us-west-2.compute.amazonaws.com:8088 maxconn 100 check
      server DFSP2-Test ec2-52-32-200-2.us-west-2.compute.amazonaws.com:8088 maxconn 100 check

After creating the configuration file for the Mule instance, place it under any directory (e.g., /home/ec2-user/scripts/modusbox/haproxy).

Execute the following command from that directory to run the load balancer configuration:

  haproxy -f haproxy_mule.cfg

After successful execution, you will be able to access the Mule APIs at the URL:

  http://<AWS ec2 Instance>:9001/

The status of the load balancer can be viewed in a browser at:

  http://<AWS ec2 Instance>:9002/

21 Interop Services

The various interop service APIs act as proxies and/or provide features such as validation, authentication, and data transformation where necessary. The services operate based on service specifications provided in both Open API and RAML, and they run on the Mule community runtime. Four interop microservices are mentioned below (two of them, interop-spsp-clientproxy and interop-spsp-backend-services, have recently been merged into the single interop-scheme-adapter, leaving three services in total), along with several supporting projects.

There are also several repositories, discussed below, that support non-functional requirements such as performance, logging, metrics, and deployment.

Contents:

21.1 Overview

Structure of interop services

Overview of mule services

21.2 Architecture

User Message/Flow Diagram of L1P System

The diagram in this section (L1P Reference Implementation) shows the positive or “happy” path of the user. Negative and boundary cases are described in other specifications. A data flow diagram is also used for threat modeling.
Overview of L1P services

21.2.1 Interfaces

  • interop-dfsp-directory - This project provides an API gateway to the IST Directory Naming Service and provides resources to - “get metadata about a directory”, “get customer information by providing identifier, identifierType”, “Register a DFSP” and “get identifierTypes supported by the central directory”
  • interop-spsp-clientproxy - This interop project fulfills the role of an ilp-spsp-client proxy. It provides an API gateway to the ilp-spsp-client service and supports the query, quoteSourceAmount, setup, and payment request methods as specified. This is now deprecated and its functionality is covered by interop-scheme-adapter.
  • interop-spsp-backend-services - This project provides an interop API implementation that interacts with SPSP server and DFSP. Implementation of SPSP Backend server is based on ilp-spsp-server specified here. This is now deprecated and its functionality is covered by interop-scheme-adapter.
  • interop-ilp-ledger - This project provides an interop API implementation of ILP Ledger Service.
  • interop-scheme-adapter - This project provides an API gateway to the ilp-service microservice. It supports the query, quote, and payment request methods as specified in the ilp-service.

21.2.1.1 Supporting projects for functionality

21.2.1.2 Projects for non-functional requirements

21.3 Testing

Test Strategy

Java unit tests exist for each of the projects, some of which use the WireMock framework. Tests are run as part of executing the Maven pom.xml via mvn clean package.

Along with these unit tests, additional tests can be run by using the tests present in interop-performance which include several functional and scenario tests. These include USSD tests as well as JMeter scripts that can be used for performance/load testing and cover end-to-end scenarios.

21.4 Security

Security/Threat Model for L1P Reference Implementation is here

21.5 Resilience

Resilience model for L1P Reference Implementation is here

21.6 Performance

Performance approach for Interop services and for L1P project as a whole is described here. Tools used and ways to perform analysis are also described. This can be used for use case or scenario tests as well as isolated testing of services using mocks. The same approach and scripts can be used for Load Testing as well.

21.7 Logging

Logging guidelines for the L1P project were drafted and, after review by partner teams, documented here. Aspects of end-to-end tracing and support for metrics are discussed and requirements described in the guidelines document. The configuration used and other customizations, such as adding indexes, can be found in the interop-elk project.

21.8 Deployment

The L1P system can be deployed using Vagrant and Ansible playbooks to create two DFSP VMs and one CST VM, with supporting MGMT VMs, allowing execution on all supported platforms, including Windows. The user guide for this is here.

22 Mule’s Docker Image

This Docker image is based on the official Java Docker image.

Applications are incorporated into the image, which is why we need to change the Dockerfile: a COPY command copies the application zip file into Mule’s applications folder, /opt/mule/apps.

Mule folders for domains, configs and logs are mounted volumes so those can be mapped to host directories.

22.1 Building it

22.1.1 Parameters

There are two build parameters: one is the version and the other is the port that is exposed. If a parameter is missing, the defaults are 3.8.0 for the version and 8081 for the port.
The supported versions are: 3.8.0, 3.7.0, 3.6.1, 3.6.0 and 3.5.0.

22.1.2 Command

In order to build, the following command is used, replacing the <<...>> placeholders with the actual values:

docker build -t <<userName>>/<<imageName>>:<<tag>> -f <<dockerFileName>> <<dockerBuildPath>>

username is only required if the image needs to be pushed to Docker Hub.
If the Dockerfile is named “Dockerfile” then there is no need to specify the -f <<dockerFileName>> argument.
Mule’s application zip should be saved into dockerBuildPath.

22.1.3 Final output

After this command is run, a Mule image is created that is ready to be instantiated as a container.

22.1.4 Running a container from the image created

The following command starts and runs a container:

docker run -d -p 8081:8081 -v <<hostFolder>>:<<containerVolumeFolder>> --name <<containerName>> <<imageUserName>>/<<imageName>>:<<imageTag>>

-d indicates that the container will run as a daemon.
-v is used to map a hostFolder to a container’s volume.
<<imageUserName>>/<<imageName>>:<<imageTag>> is mandatory and specifies which image to use for the new container.

22.1.4.1 Example

docker run -d -p 8081:8081 -v ~/hostMuleLogs:/opt/mule/logs --name myMuleContainer modusbox/mule:latest

The following command can be used to log into a container:

docker exec -ti <<containerName>> bash

23 Using Logging

23.1 Introduction

This document provides logging guidelines for Mojaloop services in order to provide end-to-end traceability of interactions, aid in troubleshooting, and publish metrics to the backend.

23.1.1 Desired Goals

  • End-to-end Traceability of a particular transaction
  • Understand service behavior
  • Debugging
  • Metrics for a particular transaction

23.1.1.1 General:

All log statements must begin with an ISO 8601 compliant timestamp with millisecond resolution, followed by a log level. The available log levels are
ERROR, WARN, INFO, and DEBUG. For example:

2017-04-28T17:16:20.561Z INFO ilp-routing:routing-tables debug bumping route ledgerA: mojaloop.dfsp1.  ledgerB:   nextHop: mojaloop.ist.dfsp2

23.1.1.2 1. End-to-end Traceability:

To provide end-to-end traceability of Level One payment related interactions, L1P components shall include L1p-Trace-Id in all log statements where available. The following snippet must be included in all of the log lines: L1p-Trace-Id=<current_trace_id>. For a given unit of work (the related interactions between services), the L1p-Trace-Id is required to be unique; it is recommended that a UUID be used to meet this requirement. This allows all logging for a given L1p-Trace-Id to be retrieved quickly.
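A minimal sketch of a logging helper that follows these guidelines (illustrative only, not an official Mojaloop library):

    // Emits: <ISO 8601 timestamp> <LEVEL> L1p-Trace-Id=<current_trace_id> <message>
    import { randomUUID } from "crypto";

    type Level = "ERROR" | "WARN" | "INFO" | "DEBUG";

    function log(level: Level, traceId: string, message: string): void {
      // toISOString() produces an ISO 8601 timestamp with millisecond resolution.
      console.log(`${new Date().toISOString()} ${level} L1p-Trace-Id=${traceId} ${message}`);
    }

    log("INFO", randomUUID(), "prepare received"); // a UUID satisfies the uniqueness requirement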

23.1.1.3 2. Rest Service Calls:

All REST service calls must include L1p-Trace-Id as an HTTP header. The value of this header must be set to the Payment ID for all payment interactions.
For all non-payment interactions, the originating DFSP must generate and use a UUID as the value of the header. In the case where the L1p-Trace-Id header is not present,
an error needs to be logged with context about the call with the missing header. The service must set the L1p-Trace-Id header to the appropriate value
whether or not the current interaction is part of processing a payment.

23.1.1.4 3. Web Socket Notifications:

[To Be Filled in]

23.1.1.5 4. Additional Context in Logs

It is recommended to log other identifiers that can help retrieve log statements across multiple layers in the Mojaloop stack,
for example Transfer ID, User ID (USSD ID, email, login name), AppName, and AccountId, in the form Relevant-Id=<id_value>.

23.1.1.6 5. Metrics Logging

Logs can be used to publish metrics to the metrics service. There are 2 types of metrics that are supported. The details about supported
metrics are available here. The snippets below show the syntax for publishing metrics:

  1. Counter

    ... L1P_METRIC_COUNTER:[counter-namespace.name] ...
    where L1P_METRIC_COUNTER is a keyword followed by a colon and the desired metric name* is within [ ]. This would increment the
    counter identified by the metric name by 1.
  2. Timer

    ... L1P_METRIC_TIMER:[timer-namespace.name][50] ...

    where L1P_METRIC_TIMER is a keyword followed by a colon, the desired metric name* is within the first [ ], and the timed value in milliseconds is within the second [ ].
    This would add the time value in milliseconds to the timer identified by the metric name*.

* The metric name is composed of 3 elements separated by periods:

  1. environment, which captures where the application is running, e.g. dfsp1-test, dfsp2-qa, etc.
  2. application instance id, which should identify the process
  3. metric name, which can contain alphanumeric characters and may have a Java-package-style prefix
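Putting these rules together, a complete log line carrying both metric types might look like the following (the environment, instance id, and metric names are hypothetical):

    2017-04-28T17:16:20.561Z INFO transfer prepared L1P_METRIC_COUNTER:[dfsp1-test.ilp-service-1.transfers.prepared] L1P_METRIC_TIMER:[dfsp1-test.ilp-service-1.transfers.prepare-time][50]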

23.2 As a Mule Developer

23.2.1 Installation and Setup

23.2.1.1 Anypoint Studio

23.2.1.2 Standalone Mule ESB

  • https://developer.mulesoft.com/download-mule-esb-runtime
  • Add the environment variable you are testing in (dev, prod, qa, etc.). Open <Mule Installation Directory>/conf/wrapper.conf and find the GC Settings section. There will be a series of wrapper.java.additional.(n) properties; create a new one after the last one (where n is typically 14), assign it the next number, and give it -DMULE_ENV=dev as its value (e.g., wrapper.java.additional.15=-DMULE_ENV=dev)
  • Download the zipped project from Git
  • Copy zipped file (Mule Archived Project) to <Mule Installation Directory>/apps

23.2.2 Run Application

23.2.2.1 Anypoint Studio

  • Run As Mule Application with Maven

23.2.2.2 Standalone Mule ESB

  • CD to <Mule Installation Directory>/bin -> in terminal type ./mule

23.2.3 Test Application

23.2.3.1 Anypoint Studio

  • Run Unit Tests
  • Test API with Anypoint Studio in APIKit Console
  • Verify Responses in Studio Console output

23.2.3.2 Standalone Mule ESB

These tests describe the expected behavior of the services separate from the interfaces. Should the interfaces change, these behaviors would be expected to continue to work. Each test is described as a simple bulleted sentence that states the test conditions and expected behavior. This conveys the same information you might expect in a more formal format like Gherkin, but is easier to read and review.

24 Resilience Modeling and Analysis

Wikipedia: “Failure mode and effects analysis (FMEA) . . . was one of the first systematic techniques for failure analysis.”

When FMEA is applied to software services it is called Resilience Modeling and Analysis (RMA); see the white paper.

It was developed by reliability engineers in the late 1950s to study
problems that might arise from malfunctions of military systems.

FMEA can be applied directly to software services to identify and rank possible service failures. Once identified, there are standard mitigation and testing patterns that can be applied to resolve each failure. This process is used to prevent major failures and reduce downtime.

25 What RMA is

Resilience modeling assumes that there will be failures in a system. It doesn’t focus on increasing reliability, which is measured below as mean time to failure (MTTF); instead it focuses on reducing time to detection and recovery (MTTD and MTTR). By reducing time to detection and time to recovery (the red area below), availability (the green area) is maximized.

Availability

Figure - Availability from MTTF, MTTD, and MTTR

For RMA, developers look at the architecture’s data flows and ask: “what could go wrong here, how bad will that be, and how often will that happen?” RMA provides a prioritized list of potential faults. We extend RMA to add one or more ways to detect, mitigate, and test the handling of each of those faults.

RMA is a very similar process to threat modeling, except that instead of looking for threats we look for faults, and instead of using a threat acronym like STRIDE we use DIAL:

  • D - Discovery: name resolution, configuration
  • I - Incorrectness: corruption, version mismatch, sequence errors, duplicates
  • A - failure of Authorization or Authentication
  • L - Latency; slow or no response, flooding, deadlocks, metering, timeouts

26 Standard Microservice Resilience Patterns

Because Mojaloop follows a microservices architecture, there is a group of standard potential failures that all such services share. Because every microservice has the same issues, these issues can be grouped together by failure mode, along with standard methods of detection, mitigation, and testing.

When we apply DIAL to microservices, we find several standard patterns
for failure.

26.1 Failure Pattern #1 - Low Resources

A low resource condition is common to all software. It has the advantage
that you can often detect and correct the problem before the failure
occurs.

Example failures:

  • Low memory
  • Low disk space
  • Excessive CPU
  • Peak network traffic

Detection

Use a system monitoring tool (Ex: AWS, Nagios, App Analytics, Sensu, New
Relic, SCOM, etc.)

Two stages:

  1. Yellow: Raise event when resource is getting low and before it’s a
    problem

  2. Red: Alert when the resource is critically low or gone

For each microservice we create a table. Here’s an example:

Resource Green Yellow Red
CPU <80% >80%, 1-minute average > 95%, 10-minute average
Disk <80% full 80 to 95% full > 95%
Memory <80% available memory utilized 80 to 95%, 3-minute average > 95%, 5-minute average
Network <80% network capacity, 5-minute average 80 to 95%, 5-minute average > 95%

In Mojaloop, we make use of the ELK stack and Metricbeats
for gathering system data. This makes the data available to any number of
alerting systems.

Mitigation

Graceful degradation: the system continues to function, but some functionality may temporarily stop. As an example, in our case, new money transfer prepare requests might be slow or rejected while the service processes existing fulfillment work.

Fault Injections

There are many standard tools to fill disk space, allocate large amounts
of memory, hog CPU cycles, and throttle the network.

26.2 Failure Pattern #2 - Service is down

Example Failures:

  • Microservice down
  • Mule down
  • DB/SQL down

Detection

In our case, each microservice implements a health endpoint which returns an HTTP 200 if the service is up. A microservice may return JSON to indicate that the service is degraded (yellow/warning status). The health service works by doing a simple internal check of the service.
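A minimal sketch of such a health endpoint, using Node’s built-in http module (the port, path, and JSON shape are assumptions for illustration):

    import { createServer } from "http";

    function internalCheckIsSlow(): boolean {
      return false; // placeholder for the service's simple internal check
    }

    createServer((req, res) => {
      if (req.url === "/health") {
        // 200 means the service is up; the body can flag a degraded state.
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ status: internalCheckIsSlow() ? "slow" : "ok" }));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(3000);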

We use the free Mule runtime, which can be extended on-site (for a license fee) with monitoring and dashboards to cover this; alternatively, another monitoring service may be used.

Service failures are also detected by a calling service when that service receives a connection failure (ex: 404). This works for dependent services that don’t implement a health service. These failures are logged and picked up by the logging engine (ELK stack), where they can be integrated with a monitoring service.

Mitigations

If a down service is detected, the first mitigation is to restart the service. We use Ansible playbooks for service startup.

Restart the service in a known good configuration. We use Ansible
playbooks to deploy and configure the services. These can be run to
redeploy and restart all the stateless services or restart the stateful
ones. Restarting the service should generate a configuration change
event.

Additional failover processes may be deployed in production.

The monitoring service should alert an operator when the service has
been down for a threshold amount of time.

Fault Injections

Stop the service

26.3 Failure Pattern #3 - Health Modeling

26.3.1 What is Health Modeling?

Health modeling is included here as a part of resilience, but it has a larger role in helping operations maintain the service. Health modeling answers: “what state is the service in, and what action can I take to correct it?” In services, a health model defines a pattern that typically looks like “check the service state, attempt to fix it automatically if it’s broken, alert the operator if we can’t.”

The first part of health modeling is defining the actionable states of the system.

A very simple health model might have three states: broken, slow, and working. The most general form of this model is a finite state model or Petri net showing the three states and all possible transitions between them:

Basic Health Model

where the transitions are typically events that come from log events or health checks. Ex: The transition from Working to Broken might be “health check doesn’t return 200”.

In a simple model like this one, where the severity of the problems can
be stack ranked, it’s easy to model the system as a chain of
responsibility pattern:

If (health state doesn't return 200) then broken

if (health state returns 200 with "slow" in JSON) then slow

else working

This kind of pattern is very easy to code and test. You can have any
number of if/then statements in this kind of model, and multiple
consecutive statements can lead to the same state. State checks are
ordered from worst outcome to best. If any statement is true, the chain
stops.
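A minimal sketch of that chain-of-responsibility evaluation (the state names and checks mirror the example above; the wiring is illustrative):

    type HealthState = "broken" | "slow" | "working";

    interface Check {
      state: HealthState;
      matches: () => boolean; // true if this (worse) state applies
    }

    // Checks are ordered from worst outcome to best; the first match wins.
    function evaluate(checks: Check[]): HealthState {
      for (const check of checks) {
        if (check.matches()) return check.state; // the chain stops here
      }
      return "working";
    }

    // Illustrative wiring of the two rules from the text:
    declare const statusCode: number;
    declare const body: string;
    const state = evaluate([
      { state: "broken", matches: () => statusCode !== 200 },
      { state: "slow", matches: () => body.includes("slow") },
    ]);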

The second part of the health model is the recovery process. This is also an ordered set of operations that can be shown as one or more chain of responsibility patterns, either for the entire system or for a group of states within it. Example:

if (broken for more than 5 minutes) alert operator

If (broken for more than 2 minutes) then raise event and run Ansible playbook to redeploy service

If (broken) then raise event and run Ansible playbook to restart service

if (slow and # of services > N) then alert operator

if (slow) then raise event and run playbook to add additional microservice

if (working and more than 1 service and a service is idle) then run playbook to scale down services

Describing the actions like this makes it easy to automate the responses
and understand what should happen when problems occur.

27 A General Microservice Health Model

An advantage of microservices is that every microservice has the same
kinds of possible states and transitions. A general health model can
apply to most of the operations in any microservice. Once we have that
model we only need to worry about special cases specific to our service.

Our simple health model has only four states (in order):

  • Stopped - the service is stopped or unresponsive
  • Misconfigured - a catch-all state for when something has gone wrong
    that the code can’t automatically fix. We don’t support
    auto-rollback of a new deployment, but if you want, you can add a
    “Mis-deployed” state above this one to cover the case where a new
    deployment was done N minutes ago yet the service is still in the
    misconfigured state.
  • Slow - performance is below an acceptable threshold
  • Working

The main difference between the simple health model example above and
our model is the addition of the misconfigured state. Below are details
for handling each state.

27.0.0.1 Stopped Service

Example Failures:

  • Microservice down
  • Mule down
  • DB/SQL down

Detection

In our case, each microservice implements a health endpoint which returns an HTTP 200 if the service is up. A microservice may return JSON to indicate that the service is degraded (yellow/warning status). The health service works by doing a simple internal check of the service. We use the free Mule runtime, which can be extended on-site (for a license fee) with monitoring and dashboards to cover this; alternatively, another monitoring service may be used.

Service failures are also detected by a calling service when it receives a connection failure (ex: 404). This works for dependent services that don’t implement a health service. These failures are logged and picked up by the logging engine (ELK stack), where they can be integrated with a monitoring service.

Mitigations

    If the service has been down for N + M minutes alert an operator
    Else, restart the service using an ansible playbook

Additional failover processes may be deployed in production. Restarting
the service generates a configuration change event.

Fault Injection

Stop the service to test that the service will be restarted.

27.0.0.2 Configuration error examples

Many possible failures lead to the misconfigured state. In all cases, the configuration error detection can come from a logged message, since the service is running and logging; it’s just not communicating. That message should have a log type to indicate that there’s a config error. The types of checks will depend on which communication methods the microservice supports. Here’s a list:

Http

  • Auth: major security failure. Unable to call upstream service and/or
    all clients can’t get data
  • Misconfigured URL
  • Misconfigured network
  • API version mismatch

Web sockets

  • Multiple clients on same socket
  • Port not open or configured (ex: in Docker)
  • Socket not configured (DFSP initiates)
  • Major version mismatch
  • client access auth failure - client service logs config error
  • error on notify - receiver logs error (we may consider retries here before failing)

SQL

  • Misconfigured Connection string
  • Misconfigured network

General - infrastructure version incorrect (ex: OS, Docker, JavaScript runtime)

Besides all the config failures, it can be helpful if the service has a
“configuration good” log event that gets fired after startup or a
configuration change. This allows the model to know when a state has
returned to “working”. Since we deal with configuration problems
manually, this is not required, but in an automated setup it would be
needed.

Detection

  • Run a scheduled test to ping the health service
  • Listen for actionable log messages marked with the configuration
    type

Mitigation

Use the mitigations above for restarting the service, but add this check
in the middle:

  • If the service has been down for N minutes, use an Ansible playbook
    to redeploy and configure the service. A stateful service can also
    be reinstalled, leaving the existing data volume untouched.

Fault Injections

  • Change Http or socket config
  • Change client or server auth
  • Force update of component to incorrect version or configuration

27.0.0.3 Slow Service

If performance is below an acceptable threshold, the health state will return that. The model here follows the example above.

Detection

  • if (health state returns 200 with “slow” in JSON) then slow

Mitigation

  • if (slow) then raise event and run playbook to add additional microservice
  • if (working and more than 1 service and a service is idle) then run playbook to scale down

Fault Injection

  • Use a tool to load the processor

28 Mojaloop Specific Health Modeling

Mojaloop has two additional potential faults that need to be addressed, each of which could cause a participating DFSP to lose money: overloaded ledgers and dropped messages.

28.1 Overloaded Ledger

Example: Payer sends $100. Payee DFSP agrees and starts fulfillment. The payer ledger is overloaded and doesn’t resolve the transfer in time; however, the transfer has been fulfilled by the payee DFSP and the center. The payer DFSP loses $100 during settlement.

A similar problem happens if the central ledger doesn’t resolve the transfer. Then the payee DFSP can be out the $100 during settlement.

Detection

Track the remaining time on all transfers. If a transfer is not reported (either fulfilled, rejected, or cancelled) before the timeout window ends, then it’s delinquent and needs to be checked on.

Mitigation

1) Thread priority: Ledgers handle fulfillments before preparing new transfers.
2) Query on timeout: If the payer DFSP hasn’t explicitly heard a “fulfill”, “reject”, or “cancel” message by the end of a transfer timeout, it queries the center to get the current status of the transfer. The payee DFSP can check the status at any time, such as before a settlement window or on restart of its services after a failure.

Extending this pattern: This solution treats the central ledger as the source of truth. It can be extended to multiple hubs where the transfer goes through many hops before getting to the final destination. This works following an eventual consistency model, the same as above but with multiple central hubs. Each central hub acts exactly the same way and uses the same transfer ID for the transfer. At the end of a timeout, if any participant doesn’t know the status, they check and retry the next participant up the chain. Whenever a participant changes a ledger, they send status both up and back down the chain. Downstream, they should get a corresponding change notification; if that doesn’t happen, they retry.

Fault Injection

Take down the payee ledger adapter

28.2 Messages dropped

As with the ledger overload problem, if the ledger notifications are missed or dropped, the payer or payee ledger can lose money.

Detection

  1. The payer DFSP detects when no message is received within the timeout
  2. The payee DFSP doesn’t receive an acknowledgment and fulfillment notification from the central ledger.

Mitigation
1) Query on timeout: The payer DFSP can resolve this with the same mechanism as an overloaded ledger.
2) Wait for Notify Fulfillment message: The payee DFSP would lose money if it fulfills a transfer but doesn’t deliver the notification of it to the central ledger. To mitigate this, the payee DFSP doesn’t notify the payee or hand any money out until it receives a fulfillment notification from the central ledger. The payee DFSP can choose to check on a transfer status with the center, but it is not recommended to do this for every transfer.
3) Retries (with idempotent writes): the ILP-Connector may retry fulfillment messages automatically (opt-in) if there’s no response. For this to work properly, the ledgers must implement idempotent writes on the transfer ID, as sketched below.
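A minimal sketch of an idempotent fulfillment write keyed on the transfer ID (an in-memory map stands in for the ledger’s store; this illustrates the property, not Mojaloop’s implementation):

    // Repeating the write for the same transfer ID returns the stored result
    // instead of executing the transfer a second time, so retries are safe.
    const fulfillmentsByTransferId = new Map<string, string>();

    function fulfillTransfer(transferId: string, fulfillment: string): string {
      const existing = fulfillmentsByTransferId.get(transferId);
      if (existing !== undefined) return existing; // retried message: no-op
      fulfillmentsByTransferId.set(transferId, fulfillment); // first write executes
      return fulfillment;
    }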

Fault Injection

Drop/block the fulfillment notifications

29 Account Management Tests

Account management is internal to the DFSP service. It is tested and verified via the USSD interface.

29.1 Use Cases

29.1.1 Add account

A user can have zero or more accounts. The first account is made when the user signs up with the DFSP (see user management tests). Adding an account will add additional accounts for that user in the same DFSP.

The Account Name and Is Primary (Y/N) are required inputs. When a new account is created it shows up in the switch accounts list.

Currently, the account name is unique within DFSP. You can’t create an identical account name for two different users. This is a convenience for the current implementation, and not tested. This restriction can be removed without loss of functionality.

29.1.2 Close account

  • Secondary accounts can be removed.
  • A non-signatory doesn’t have the option to close an account. This is manually verified.
  • The primary account can’t be closed (gives an error message when you try).
  • An account can’t be closed if there is money in it. There’s an error message when it’s attempted.
  • The account signatory can close an empty account.

29.1.3 Switch account

  • USSD shows the message: “You don’t have any other account to switch to” when there is only one account.
  • A user can switch between available accounts when there is more than one.

29.1.4 Primary (default) Account

  • When there is more than one account, the primary account is where money will be sent.
  • A new account can be made primary when it is created. The previous primary account will become a secondary account.
  • A secondary account can be made primary. The previous primary account will become a secondary account.

29.1.5 Additional holders (users)

The original holder of the account is considered the owner. For now, there is no option to change owners. Additional holders can be signatory or non-signatory.

29.1.5.1 Add additional holder to account

  • Non-signatory can’t add a user. There’s no option for this and it is manually verified.

  • A signatory for the account can add other users to an account as a holder of that account. If a user is added to someone else’s account, that user can see the new account in their list of accounts to switch to. They can switch to the account and do things like see the balance.

29.1.5.2 Remove additional holder from account

  • The original owner (first user) of the account can’t be removed (this would require removing the user from the DFSP). Attempting it gives an error.

  • A non-signatory can’t remove anyone. There is no option to do this and it is manually verified.

  • A signatory can remove another user from an account who is not the owner.

29.1.6 Account Permissions

  • The owner is a signatory and they have the same menus (manually verified). An owner can’t be made non-signatory (error).

  • A signatory can change another holder, who is not an owner, from signatory to non-signatory and vice-versa.

  • A non-signatory holder can’t send money or manage accounts. They can send invoices and look at the mini-statement and account balance (manually verified).

Customer Management Tests

29.2 Association of a phone to a customer

A phone has an identifier that the DFSP uses to associate the phone with a customer.

  • If a customer connects using a phone with an unassociated identifier then the customer is asked to create an account.

  • If the customer connects with a phone that has been associated with a customer, the customer receives a menu of account actions.

There is currently no way to associate another phone with the same customer.

29.3 Add customer

A customer is identified by at least their name, birthdate, and an ID such as their national ID number. Adding a customer requires this data and returns the user number from the central directory service. Adding a customer creates at least one account for that customer in the DFSP.

  • If a customer is already registered at that DFSP, then attempting to add the customer again from another phone (same name, birthdate, and ID) returns an error.

  • If the fraud service returns 100, customer isn’t added (error).

  • A new customer can add themselves to the DFSP. The customer is registered in central directory associated to that DFSP.

  • Different customers can be registered with same name and/or birthdate.

  • The same customer can be registered with multiple DFSPs, though they will have different customer numbers.

29.4 Remove customer

  • If the account owner closes the last account (see account management), the customer’s account is closed at the DFSP and no customer can connect to that account. The customer is no longer associated with the DFSP in the central directory, and the phone will again ask for an account to be created.

29.5 Change password

Changing password functionality is not currently implemented [#420].

  • Simple passwords are not allowed (all one number, straight runs, too short) [Not implemented: #331]
  • A customer has a single password for all accounts in a DFSP. This is a convenience for our implementation, and not tested.

29.6 Account Types

  • There are several types of accounts that may be created: customer, agent, and merchant. A customer has only one account type and it is set when the customer is added. Currently there is no option to change account types or have different account types for a single customer.

29.6.1 Merchant accounts

  • Merchants can send pending transactions; others can’t send pending transactions but can approve them (manually verified).

29.6.2 Agent accounts

  • Agents start with two accounts: the main one and the commission account.

  • Commission account can’t be closed (no option exists, manually verified)

  • The commission account can’t be made the primary (error)

  • Agents have the option to do Cash In and Cash Out transfers, others don’t. The commission account can’t send money, cash-in, cash-out, or pending transactions (manually verified).

DFSP Management

DFSPs can be registered and put on hold in the central directory. Doing so enables or disables the DFSP from conducting transfers with the central ledger.

These operations are done through RESTful API calls to the central directory (https://github.com/mojaloop/Docs/blob/master/CentralDirectory/central_directory_endpoints.md).

29.7 Add DFSP

Registers the DFSP with the central directory.

29.8 Pause DFSP

Not yet implemented. This should cause all calls for that DFSP’s users to return “unknown” and all pending transfers for that DFSP to be cancelled at the center. It could be called by a DFSP for itself or by a regulator at the center.

29.9 Unpause the DFSP

Not yet implemented. Would renew normal operations for the DFSP.

30 Fee tests

Below we list the equivalence classes that make up the test combinations in the test matrix.

30.1 Variations

30.1.1 Fee Source

  • Sender fee
  • Receiver fee
  • Agent cash-out fee
  • Agent cash-in fee
  • Central fee

30.1.2 Path for transfer

  • Cross-DFSP
  • Same DFSP (should not apply center fee)

30.1.3 Configure Amount

  • Stair-step: flat fee plus percent for range
  • Zero

30.2 Test Matrix

Using pair combinations of the variations we get a matrix like this:

Path Source Receiver Center Agent Cash In Agent Cash Out
Cross-DFSP 0 Stair-step 0 Stair-step 0
Cross-DFSP Stair-step 0 Stair-step 0 Stair-step
Same-DFSP Stair-step Stair-step 0 0 Stair-step
Same-DFSP 0 0 Stair-step Stair-step 0
* 0 * Stair-step 0 Stair-step
* Stair-step * 0 Stair-step 0
Cross-DFSP * * * Stair-step Stair-step
* * * * 0 0

* value doesn’t matter

30.3 Validations

For each variation verify that the fees can be:

  • Configured
  • Shown to the sender (it’s enough to check the quote return from the scheme adapter)
  • Deducted from the transfer
  • Itemized for settlement (for these last two it’s enough to check central ledger)

Forensic Logging Tests

30.4 Stop service when not logging

If the forensic logging service stops, the service(s) connected to it immediately detect this and also stop.

When the forensic log service is started it logs a restart to the log.

30.5 Show complete data for a single transfer

A forensic log can be built across the entire path of the transfer that shows the passage of a single transfer (by Transfer ID) and its state at each point in time, including retries and down services.

30.6 Show complete data by time period and customer

A forensic log can be built across the entire path of the transfer that shows the passage of all the transfers for a given user during a time period. The time period is defined from the point of view of one service, such as the center, and the transfers shown are those valid for that service in the time period.

30.7 Show when a log has been tampered with

In each case, show that part of the log (a line or a section) has been tampered with, and identify the parts that have not been tampered with:

  • Change a row
  • Change order of rows
  • Drop a row
  • Add a row

Pending Transaction Tests

Merchants are able to send a pending transaction. These show in their pending transaction list until resolved. Anyone can approve or reject a pending transaction sent to them.

30.8 Send pending transaction

Assumes the user is a merchant

  • (x) Send invoice for 0
  • (x) Send for valid amount to non-existent customer, get error
  • Send pending transaction for a valid amount to valid customer

30.9 Approve pending transaction

  • Approve a transfer; the money and fees are transferred from the approver’s account. The principal goes to the pending transfer’s sender.
  • (x) Approve a transfer when the amount exceeds the user’s balance: get an error, and the transaction is neither sent nor rejected.

30.10 Reject pending transaction

  • Reject the proposed transfer. Notification goes back to sender.

Send Money Tests

As part of the Level One principles, the customer must be able to see at least the name of the person or business they are sending their money to and the full cost of the transfer, broken out by principal and total fees, before they approve sending money. Money can only be sent (pushed), not debited (pulled).

30.11 Variations

Instead of listing every case, we list the equivalence classes for variations that can be done when sending money. These include positive cases, positive and negative boundary cases, and invalid cases. In all positive cases, the fulfillment should be recorded in the ledgers of both the payee and payer DFSPs and the central ledger. Under no cases should the payment not be represented correctly in all three ledgers, though for some negative cases, the matching will require services to be restarted or connections to be reestablished.

Combinations of some equivalence classes should be tried. In general, negative cases, marked (x), are not combined with other variations unless mentioned and should have an error message.

30.14.1 Destinations

  • Payer and Payee are on the same DFSP
  • Payer and Payee are on separate DFSPs
  • (x) Invalid Destination customer

30.14.2 Customers

Combine with destinations

  • Same customer
  • Different customer
  • Same customer, different account but same DFSP

30.14.3 Test Matrix

This test matrix condenses the positive cases above into two simple tests. Other variations are covered below and in other tests.

Destination Customer
Same DFSP Same customer different account
Different DFSP Same customer but different ID

30.14.4 Amount to send

Amounts don’t need to be combined with other variations.

  • 1
  • Some
  • Exact account balance
  • (x) More than account balance due to fees
  • (x) More than account balance

30.14.5 DFSP Limits

Limits to the number of transfers in a day, the maximum account size, or the maximum transfer amount are configuration limits implemented at the DFSP and don’t need to be combined with tests of other services.

If there is a limit on the number of transfers, then a cancelled or rejected transfer still counts against a customer limit. Refunds are separate transfers initiated by the DFSP that do not count against a customer limit.
Sending to yourself on the same DFSP shouldn’t be counted toward any limits.

Configuration: Verify limits can be set to both none and some amount. Changing the limits should be logged to the forensic log.

Set the maximum number of transfers and transaction size low (2) and try:

  • The maximum number of transfers in a day
  • (x) One more than the maximum number of transfers per day
  • Send the maximum transaction size
  • (x) Send more than the maximum transaction size
  • Exceed the limits by sending to yourself (should work)

30.14.6 States

A transfer can be in one of several states. Some of the states have multiple ways they can occur. Each of these variations needs to be tested, but they don’t need to be combined with other variations.

To be able to test the cancel states, we need to be able to hold the sending of the payment messages in each service until after the timeout. Likewise, to test the rejection states, we need to be able to force a rejection from the center or the DFSPs.

Here is the list of possible states:

  • Unknown
    • Preparing and within timeout. This is part of the normal flow; it is tested in regular end-to-end tests.
    • After timeout, but not notified. This happens when a service is down or a message is dropped and is tested in the resilience tests below.
  • Cancelled (timeout)
    • Payer DFSP timeout. After the quote the payer DFSP doesn’t send the prepare till it has already timed out. If it attempts to send it anyway, the center should reject the prepare and the sender should show a cancellation to the customer.
    • Center timeout during prepare. The center sits on the payment till it times out.
    • Payee DFSP timeout. The receiver sends a cancel notification.
    • Center timeout during fulfill. The center acknowledges the fulfill message, but sends a cancellation for the notification to both sender and receiver.
    • Final Payee timeout. In this state the transfer is already fulfilled. Even if the timeout occurs after receipt by the sender but before the sender ledger handles it, the sender DFSP should process the transfer.
  • Rejected
    • Payer DFSP rejects (ex: fraud or insufficient account funds)
    • Center rejects (ex: insufficient settlement funds)
    • Payee rejects (ex: fraud)
  • Fulfilled

See settlement tests below for additional validations.

30.14.7 Thread contention/Sequence errors

  • Remove a destination user after the quote but before the transfer is received by the destination DFSP. This should result in a rejection of the transfer by the Payee DFSP.

30.14.8 Time skew

  • Verify time skew is not relevant by setting each service on different dates and sending money. It is expected that the cross-service logs will show odd times.

30.14.9 Resilience

Despite the failures listed below, no ledger should lose money and the transfer should eventually succeed when the failure is resolved. These are negative tests and not combined with other variations.

  • Message failures. The failures occur when messages are dropped or not sent
    • Halt fulfillment notification messages for center
    • Halt fulfillment notification messages for payee DFSP
    • Halt fulfillment notification messages for payer DFSP.
      Transfer should go through due to retries when the connection is re-established.
  • Service failures. In each case the payment should complete after the service is restarted.
    • Take down payee DFSP after quote
    • Take down the center before and after prepare
    • Take down payer ledger adapter after prepare
    • Take down client when a transfer is unknown (is retry initiated by the DFSP when the client is restarted?)
  • Verify Idempotence - cause retries and verify only 1 transaction on ledgers for both DFSPs and the center.

30.14.10 Settlement

To support deferred net settlement, the central ledger can easily list:

  • Fulfilled transfers
    • with fees broken out for separate accounts.
  • Balances by DFSP
  • Cancelled and rejected transfers
  • Unknown expired transfers

The first two are tested as part of fees testing. The latter two should be tested during state testing.

30.15 Refunds

Refunds are not currently implemented.

In this system all transfers are final, so a refund is a second transfer in the opposite direction for the original amount, including both principal and fees. It contains data linking it to the original transfer it negates, for auditing purposes.

DFSPs typically do not charge fees for the refund.

The refund is marked as such so that the central ledger can report on it appropriately.

Refunds may be charged a central fee if that is charged to every other transfer, which the DFSP can choose to pass on to the customer or not.

31 User Discovery

In order for a sender to make a payment to a receiver they must “discover” some things about the receiver, such
as their ILP address and the currency of their account.

Following this, further information may be required by the sending system such as the full name, public key or
even a photograph of the receiver for validation by the sender.

31.1 Flow

  1. The sender (End User) has an identifier for the receiver such as an account number, mobile number, or national identity number.
  2. The sending system (DFSP) uses this identifier to discover the information required to:

a. Show appropriate info on the phone of the sender (name of receiver, currency code of receiving account etc.)

b. Initiate the Interledger Protocol Suite (ILP) transfer (ILP address of receiving account, amount, condition etc.)

c. Get a quote for the transfer.

  3. The sender confirms the transfer and initiates it by way of the sending DFSP.

31.2 Design Considerations

  1. There are a wide variety of identifiers that may be used to initiate the discovery process.
  2. A lot of the data required is gathered during the Simple Payment Setup Protocol (SPSP).
  3. The deployment scenarios will differ and as such the registries of identifiers will have different architectures
    (for example, distributed versus centralized).

31.3 Service Discovery vs Data Discovery

Rather than discovering data about the receiver, a more extensible solution uses the receiver identifier to resolve a service endpoint. The service at this endpoint can be standardized (this will be the SPSP entry-point), and in
future this service may be extended to provide additional functionality and data.

By decoupling the discovery of the service from the service itself we define distinct phases in the preparation of an ILP transfer: discovery and setup.
These phases can be defined by distinct entry and exit gates, and the specific implementations can be changed as long as the input and output of each phase is in a consistent format.

Using such an architecture, it is also possible to host the service at a URL that does not reveal any data about the receiver, allowing public discovery systems to be used without compromising the receiver’s privacy.

Example of a privacy-protecting SPSP receiver URL: https://ist.ng/api/spsp/2911ca95-7bab-4699-b23a-6c64c03f3475 (see the central directory example below).

It is assumed that access to the SPSP receiver endpoints will be subject to policies that are designed to protect receiver
data privacy (that is, either not exposed on the public internet or protected behind an effective authorization system).

31.4 Setup Phase

The setup phase is handled by SPSP. The protocol requires that some entity (which doesn’t have to be the receiving DFSP) hosts a receiver endpoint at which the sender can query an SPSP Server for the data required to setup a transfer. All that is required to
initiate setup using SPSP is the URL of the receiver endpoint.

31.5 Discovery Phase

Working back from the requirements to start the setup phase, it follows that the discovery phase must simply return a URL.

31.5.1 Normalization of Identifiers

Since the inputs to the discovery phase are only loosely typed as “an identifier” it may be useful to be specific and call this a URI.

Identifiers that do not have a natural URI form can usually be converted to one (or one can be defined for them).
Where an identifier is provided to a sending system that is not a URI it is the responsibility of the sending system to determine the correct form based on the context and, if required, through interaction with the user.

Example 1

  • Sender provides the identifier +26 78 097 8763
  • Sending system recognizes this as an E164 format number and converts it to the URI tel:+26780978763.

Example 2

  • Sender provides the identifier bob@dfsp1.com
  • Sending system prompts the user to specify if this is an email address or an account identifier and then converts the identifier to the form mailto:bob@dfsp1.com or acct:bob@dfsp1.com.

This normalization allows a more rigid definition of a discovery service such that any service that accepts URIs and returns URLs could be used to resolve SPSP receiver endpoint URLs from receiver identifiers.
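
As an illustration of this normalization step, the following Node sketch (a hypothetical helper, not part of any Mojaloop API) converts the raw identifiers from the examples above into URIs:

// Hypothetical helper: map a raw receiver identifier to a URI.
// E.164 phone numbers become tel: URIs; for ambiguous identifiers such
// as bob@dfsp1.com, the caller passes the scheme chosen by the user.
function normalizeIdentifier (raw, schemeHint) {
  const compact = raw.replace(/[\s-]/g, '')
  if (/^\+\d{6,15}$/.test(compact)) return 'tel:' + compact
  if (/^[a-z][a-z0-9+.-]*:/i.test(raw)) return raw // already a URI
  if (raw.indexOf('@') !== -1) {
    if (!schemeHint) throw new Error('ambiguous identifier; specify mailto or acct')
    return schemeHint + ':' + raw
  }
  throw new Error('unrecognized identifier: ' + raw)
}

// normalizeIdentifier('+26 78 097 8763')       -> 'tel:+26780978763'
// normalizeIdentifier('bob@dfsp1.com', 'acct') -> 'acct:bob@dfsp1.com'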

31.5.2 Discovery of SPSP Receiver Endpoint URL

Given all that has been defined to this point we can define a discovery service simply as a service that takes a URI representing a receiver identifier as input and returns a URL that should be the receiver endpoint for an SPSP server providing services for that receiver.

Sending systems (DFSPs) should determine which discovery service to use based on the URI scheme of the identifier.

The rules for this mapping (URI scheme/identifier type to discovery service) should be defined as part of each deployment.
Mojaloop should provide implementations of one or more discovery services to bootstrap ecosystems where no such thing exists, but it should be possible for a DFSP to be configured to use other discovery services as long as they meet the minimum requirement of
resolving a URL from a URI.
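
For example, the deployment-defined mapping from URI scheme to discovery service can be as simple as a configuration table. This sketch uses the IST lookup endpoints from the central directory example below; another deployment would point the table elsewhere:

// Deployment configuration: which discovery service handles each URI scheme.
const discoveryServices = {
  tel: 'https://ist.ng/api/tel',
  acct: 'https://ist.ng/api/acct'
}

function discoveryServiceFor (uri) {
  const scheme = uri.split(':')[0]
  const service = discoveryServices[scheme]
  if (!service) throw new Error('no discovery service configured for scheme: ' + scheme)
  return service
}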

In deployments where all discovery is done through the same service (for example, a central directory) the logic for processing different identifier types can be deployed as part of that service.
Therefore it will be unnecessary for the sending system (DFSP) to be capable
of calling different services based on the identifier type.
While this is an optimization that may be possible for such a deployment, removing this logic from the DFSP will make introducing new discovery services in the future more difficult unless they are always proxied through the central service.

The logical steps for a sending DFSP are listed below; a sketch that puts them together follows the list:

  1. Get receiver identifier.
  2. Normalize identifier to a URI if required.
  3. Determine which discovery service to use based on URI scheme.
  4. Resolve URI to URL using discovery service.
  5. Initiate SPSP at resolved URL.
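
Putting the steps together, a sending DFSP’s discovery flow might look like the sketch below. It reuses the hypothetical normalizeIdentifier and discoveryServiceFor helpers above, and assumes the lookup service returns a JSON body with an spspReceiver field, as in the central directory example later in this section:

const https = require('https')

// Steps 1-4: receiver identifier in, SPSP receiver endpoint URL out.
function lookupSpspUrl (identifier, schemeHint) {
  const uri = normalizeIdentifier(identifier, schemeHint) // steps 1-2
  const service = discoveryServiceFor(uri)                // step 3
  const url = service + '?identifier=' + encodeURIComponent(uri)
  return new Promise(function (resolve, reject) {         // step 4
    https.get(url, function (res) {
      let body = ''
      res.on('data', function (chunk) { body += chunk })
      res.on('end', function () { resolve(JSON.parse(body).spspReceiver) })
    }).on('error', reject)
  })
}

// Step 5 would then initiate SPSP at the resolved URL, for example:
// lookupSpspUrl('+26 123 4567').then(function (url) { /* start SPSP session */ })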

31.6 Design Considerations

The Mojaloop project has some specific design constraints and assumptions which drive the design of this implementation. The project
favors the use of a central directory for discovery but also as a proxy for the quoting session with DFSPs. While this means it is not
necessary for the discovery and setup to be decoupled, maintaining this architecture future-proofs the solution for deployments where
these constraints and assumptions no longer hold.

31.6.1 Central Directory

The central directory will host a simple lookup service that resolves a receiver identifier to an SPSP URL.
It should host different endpoints for each identifier type so that these can easily be changed in future if required and so the logic to differentiate between identifier types is built into the DFSPs from the start.

Example

  • tel:+26123456789 -send-lookup-query-to-> https://ist.ng/api/tel
  • acct: -send-lookup-query-to-> https://ist.ng/api/acct

The central directory lookup service will return a URL in the response for any identifier lookup. The URL is the one to use to initiate an SPSP session.

To optimize this process, in a deployment where the SPSP endpoints will be hosted by the IST, the URLs will use the same host (domain) as the lookup services. This will allow the client software at the sending DFSP to re-use the underlying connection for efficiency.

An HTTP session-based authorization model that is shared by both the lookup service and SPSP service will also mean that the client
is able to re-use its authorized HTTP session further optimizing this process.

Using HTTP/2 in this architecture should further optimize this process.
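
To illustrate the connection re-use described above, the client at the sending DFSP can share a single keep-alive agent between the lookup call and the subsequent SPSP calls. A minimal Node sketch, assuming both services are hosted at ist.ng as in the example below:

const https = require('https')

// One keep-alive agent lets the lookup request and the SPSP requests
// re-use the same underlying TLS connection to ist.ng.
const agent = new https.Agent({ keepAlive: true })

function get (path) {
  return new Promise(function (resolve, reject) {
    https.get({ host: 'ist.ng', path: path, agent: agent }, function (res) {
      let body = ''
      res.on('data', function (chunk) { body += chunk })
      res.on('end', function () { resolve(body) })
    }).on('error', reject)
  })
}

// Lookup, then SPSP, over the same pooled connection:
// get('/api/lookup/tel?identifier=tel%3A%2B261234567')
//   .then(function (r) { return get(new URL(JSON.parse(r).spspReceiver).pathname) })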

It is expected that the individual account details will not be held at the IST but that the IST will provide an SPSP proxy service to
the DFSPs blinded behind a non-descriptive SPSP endpoint URL.

Example

Step 1. Sender wishes to send money to +26 123 4567 (receiver identified by mobile number)

Step 2. Sending DFSP queries Central Directory for SPSP endpoint

GET /api/lookup/tel?identifier=tel%3A%2B261234567 HTTP/1.1
Host: ist.ng

Step 3. Central directory resolves identifier (account at DFSP1) and returns a local URL that is a proxy to the SPSP Server at DFSP1.

HTTP/1.1 200 OK
Content-Type: application/json
{
  "spspReceiver" : "https://ist.ng/api/spsp/2911ca95-7bab-4699-b23a-6c64c03f3475"
}

Note: The URL gives nothing away about which DFSP is being proxied.

Step 4. Sending DFSP initiates SPSP session at https://ist.ng/api/spsp/2911ca95-7bab-4699-b23a-6c64c03f3475

Note: Both the lookup API and the SPSP endpoint are hosted at the IST.

32 LICENSE

Copyright © 2017 Bill & Melinda Gates Foundation

The Mojaloop files are made available by the Bill & Melinda Gates Foundation under the Apache License, Version 2.0
(the “License”) and you may not use these files except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, the Mojaloop files are distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

33 PKI Guide


33.1 Introduction

In this guide, we will introduce some features of the CloudFlare PKI toolkit, cfssl.
CFSSL is a tool developed by CloudFlare. It’s both a command line tool and an HTTP API server for signing, verifying, and bundling TLS certificates. It requires Go 1.6+ to build. In this guide, we use the command line tool as the example.

33.1.1 Background

Secure Channels enable confidentiality and integrity of data across network connections. In the context of Mojaloop, a secure channel can be made possible by the implementation of service transport security via TLS to protect data in-transit and enable mutual authentication. The centralization of trust in a TLS implementation is provided through a Public Key Infrastructure (PKI). Note: While the Central KMS may serve as a PKI as the Central Services evolve, an existing internal or hosted PKI can provide the management and distribution of certificates for service endpoints.
TLS helps mitigate a number of identified threats discovered during Mojaloop Threat Modeling exercises:

  • Tampering: Network traffic sniffing and or manipulation across DFSP, Pathfinder and Central Services

  • Spoofing:
    1. Rogue DFSP pretends to be another DFSP at central directory
    2. False connector subscribes to notifications for transfers
    3. Notifications are sent by a party other than the central ledger
    4. Rogue KMS requests a health check or log inquiry to Forensic Logging Sidecars
    5. Data manipulation of REST calls
  • Information Disclosure:
    1. A false connector or 3rd party connector subscribes to notifications that are not theirs
    2. Inappropriate use of Cryptography (including side-channel weaknesses)
  • Elevation of Privilege:
    1. Credential Exposure by DFSP
    2. Credential Exposure by Customer
    3. Credential Exposure by Central Services Employee

33.1.2 Rationale

The implementation of TLS is a deployment-specific consideration as the standards, configurations and reliance on a PKI are best defined by the implementor. Mojaloop team has demonstrated a PKI/TLS design which may be configured and implemented to meet the needs of a deployment scenario through the use of the CloudFlare PKI Toolkit. This toolkit provides a central root of trust, an API for automation of certificate activities and configuration options which optimize the selection of safe choices while abstracting low-level details such as the selection and implementation of low-level cryptographic primitives. An introduction to this toolkit with safe examples for the generation and testing of certificates is found below.

33.1.3 Install cfssl

To install the cfssl tool, please follow the instructions for Cloudflare’s cfssl

33.1.4 CA Config

33.1.4.1 Initialize a certificate authority

First, you need to configure the certificate signing request (CSR), which we’ve named ca.json. For the key algorithm, rsa and ecdsa are supported by cfssl, but you should avoid using a small key size.

{
  "hosts": [
    "root.com",
    "www.root.com"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
    "C": "US",
    "L": "Des Moines",
    "O": "Dwolla, Inc.",
    "OU": "Mojaloop",
    "ST": "Iowa"
    }
  ]
}

Then you need to generate a cert and the related private key for the CA:

cfssl gencert -initca ca.json | cfssljson -bare ca -

You will receive the following files:

ca-key.pem
ca.csr
ca.pem

  • ca.pem is your cert.
  • ca-key.pem is your related private key, which should be stored in a safe spot. It will allow you to sign any cert.

33.1.4.2 Run a CA server

To run a CA server, you need the ca-key.pem and ca.pem files from the first step, and a config file, config_ca.json, for the server.

{
  "signing": {
    "default": {
      "auth_key": "central_ledger",
      "expiry": "8760h",
      "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
      ],
      "name_whitelist": "\\.central-ledger\\.com$"
    }
  },
  "auth_keys": {
    "central_ledger": {
      "key": "0123456789abcdef",
      "type": "standard"
    }
  }
}
  • auth_key is the token used to authenticate the client’s CSR.
  • expiry is the valid time period for the cert. A year is around 8760 hours.
  • name_whitelist is the regular expression for the domain names that can be signed by the CA.

To run the server:

cfssl serve -ca=ca.pem -ca-key=ca-key.pem -config=config_ca.json -port=6666

If -port is not specified, the default IP and port are 127.0.0.1:8888.

33.1.5 Client Config

To generate a certificate for the client, you will need a config file, config_clients.json, for cfssl.

{
  "auth_keys": {
    "central_ledger": {
      "type": "standard",
      "key": "0123456789abcdef"
    }
  },
  "signing": {
    "default": {
      "auth_remote": {
        "remote": "ca_server",
        "auth_key": "central_ledger"
      }
    }
  },
  "remotes": {
    "ca_server": "localhost:6666"
  }
}
  • The authentication token in auth_keys must match the one in the server.
  • The server address in remotes must match the real server address.

You will also need another config file, central_ledger.json, for the service.
{
  "hosts": [
    "www.central-ledger.com"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "US",
      "L": "Des Moines",
      "O": "Mojaloop",
      "OU": "leveloneproject-central-services",
      "ST": "Iowa"
    }
  ]
}
  • The domain name in hosts must match the whitelist in config_ca.json.
  • You should avoid using a small key size in key.

To generate a certificate for the service:

cfssl gencert -config=config_clients.json central_ledger.json | cfssljson -bare central_ledger

You will receive the following files:

central_ledger-key.pem
central_ledger.csr
central_ledger.pem
central_ledger.pem will be your service’s cert, and central_ledger-key.pem will be your private key.
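
Before deploying the certificate, you can sanity-check what was issued. cfssl’s certinfo command prints the parsed certificate as JSON (openssl x509 -noout -text on the same file works equally well):

cfssl certinfo -cert central_ledger.pem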

33.1.6 Key Suggestions

When creating certificate signing requests, we suggest you avoid using small keys. The minimum requirements are shown in the following table:

Signature Key RSA ECC
AES-256 >=2048 PCurves >= 256

33.1.7 Integrating the certificates with the service

Once the certificates have been created, you will need to integrate them with your service. We will use central-ledger as an example here.

33.1.7.1 Server

On the server side, you’ll want to set it up so that every incoming request is checked against the cert. In the projects so far, our team has used Hapi, but it’s just as simple when using Node libraries.
Both methods are shown to make the process as straightforward as possible.

33.1.7.1.1 Setting up with Hapi

On the server side, we simply added the key and cert to a tls object. When the server connection is initialized, the tls object is added as an option.
The fs library is used to read the key and cert files.

var fs = require('fs')
var Hapi = require('hapi') // Hapi import shown here for completeness
// ...
const tls = {
  key: fs.readFileSync('./src/ssl/central_ledger-key.pem'),
  cert: fs.readFileSync('./src/ssl/central_ledger.pem')
}
const server = new Hapi.Server()
server.connection({
  tls
})
33.1.7.1.2 Setting up with Node

This setup doesn’t change much. The server’s key and cert, along with the CA cert used to verify clients, are added to an options object. When the server is created, they are passed as options.
The fs library is used to read the key and cert files.

var fs = require('fs')
var https = require('https')
// ...
var options = {
  key: fs.readFileSync('central_ledger-key.pem'),
  cert: fs.readFileSync('central_ledger.pem'),
  ca: fs.readFileSync('ca.pem'),
  // To actually check each incoming request against the CA, the server
  // must request client certificates and reject unauthorized ones.
  requestCert: true,
  rejectUnauthorized: true
}

https.createServer(options, function (req, res) {
  res.writeHead(200)
  res.end()
}).listen(3000)

33.1.7.2 Client

For client requests to the server, we use many of the same libraries. fs is used to read the CA cert and https is used to make the requests.
The CA cert simply needs to be added as part of the options object under the name ca.

var fs = require('fs')
var https = require('https')
var options = {
  hostname: 'localhost',
  port: 3000,
  path: '/',
  method: 'GET',
  ca: fs.readFileSync('ca.pem')
  // If the server requires client certificates (mutual TLS), also supply
  // key and cert here, e.g. key: fs.readFileSync('client-key.pem') and
  // cert: fs.readFileSync('client.pem') (hypothetical file names).
}

var req = https.request(options, function (res) {
  res.on('data', function (data) {
    // handle response data
  })
})
req.end()

34 Docs Overview

The Docs repository documents the overall architecture, component design, message flow, high level tests and an overview of the Mojaloop software.

Individual repositories in the mojaloop GitHub organization each describe component-specific details including source and APIs.

For more information on Mojaloop, see https://mojaloop.io

New developers, see the contributors guide for onboarding materials.

34.1 Mojaloop Services

The following architecture diagram shows the Mojaloop services:

Mojaloop Services

34.1.1 DFSP Service

The DFSP code is an example implementation of a mobile money provider. Customers connect to it from their mobile feature phones using Unstructured Supplementary Service Data (USSD). USSD is a Global System for Mobile (GSM) communication technology that is used to send text between a mobile phone and an application program in the network, allowing users to create accounts, send money, and receive money.

DFSP Documentation

34.1.2 Level One Client Service

The client service connects a DFSP to other DFSPs and the central services. It has a few simple interfaces to connect to a DFSP for account holder lookup, payment setup, and ledger operations. The level one client can be hosted locally by the DFSP or in a remote data center such as Amazon.

34.1.3 Central Services

The central services are a collection of separate services that help the DFSPs perform operations on the network.

34.2 End-to-End Scenarios

The aforementioned individual services can’t alone describe how key scenarios work across the system. Therefore, for each of the Mojaloop scenarios, we provide a technical walkthrough.

  1. Send Money to Anyone: scenario, walkthrough
  2. Buy Goods scenario, message flow
  3. Bulk Payment scenario, message flow

34.3 System-wide Testing

Individual services have their own tests, but the testing strategy also includes a set of system-wide tests.

34.4 Interledger

The Interledger Protocol Suite (ILP) is an open and secure standard that enables DFSPs to settle payments with minimal counter-party risk (the risk you incur when someone else is holding your money). With ILP, you can transact across different systems with no chance that someone in the middle disappears with your money. Mojaloop uses the Interledger Protocol Suite for the clearing layer. For an overview of how it works, see the Clearing Architecture Documentation.

34.5 About This Document

This document is a work in progress; not all sections are updated to the latest developments in the project. Sections that are known to be out of date are marked as follows:

OUT OF DATE STARTS HERE

Any text in this area is considered “out of date.” It may reflect earlier versions of the technology, outdated terminology use, or sections that are poorly phrased and edited.

OUT OF DATE ENDS HERE

35 Central Services Overview

The central services stack provides shared functions that allow scheme participants and Digital Financial Service Providers (DFSPs) to execute several actions using a consistent communication channel. In addition, the functions of the central services promote the overall health of the scheme, allowing DFSPs to participate with confidence and reliability.

The information in this section summarizes the various services that the central services stack offers:

35.1 Directory

The central directory is a set of services that allows DFSPs to register and retrieve scheme identifiers. The scheme identifier can be leveraged by DFSPs for end-user discovery. The services, APIs and endpoints enable:

  • Registering a DFSP
  • Adding an end user
  • Retrieving an end user

To view the references and available endpoints, please see the Central Directory repository.

35.2 Ledger

The central ledger is a set of services that facilitate clearing and settlement of transfers between DFSPs, including the following functions:

  • Brokering real-time messaging for funds clearing
  • Maintaining net positions for a deferred net settlement
  • Propagating scheme-level and off-transfer fees

To view the references and available endpoints, please see the Central Ledger repository.

35.3 Fraud Sharing - in-progress

The fraud sharing service offers participating DFSPs an avenue to share end user and transactional information to help promote overall health of the scheme via fraud prevention, focusing on:

  • Sharing end user and transaction information
  • Enabling DFSPs to prevent fraud, not the scheme

The service is currently being delivered and will be available as an initial product offering.

35.4 Forensic Logging - in-progress

The forensic logging solution provides the information required to ensure the confidentiality and integrity of the overall central services stack. Events are captured, preserved and made available to authorized inquiries. Functions of the logging mechanism include:

  • Distributed implementation and log creation/storage
  • Centralized Key Management Service (KMS)
  • Cryptographic protection of data in-transit (encryption) including proof of integrity (signing)
  • Removal of a single point of failure

The service is currently being delivered and will be available as an initial product offering.

35.5 Authentication

Currently the centralized services leverage basic authentication to secure interactions. A basic authentication solution was chosen to ensure a demonstration would be available while allowing adaptations for specific integrations. This solution is fully compatible with an HTTPS-based (TLS) environment.

36 Evolution of Mojaloop

Here we document the reasoning behind certain tools, technology and process choices for Mojaloop.

  • Open source - the entire project may be made open source in accordance with the level one principles. All tools and processes must be open source friendly and support an Apache 2.0 license and no restrictive licenses.
  • Agile development - The requirements need to be refined as the project is developed, therefore we picked agile development over waterfall or lean.
  • Scaled Agile Framework - there are four initial development teams that are geographically separate. To keep the initial phase of the project on track, the Scaled Agile Framework (SAFe) was picked. This means work is divided into program increments (PI) that are typically four two-week sprints long. As with the sprints, the PI has demo-able objective goals defined in each PI meeting.
  • Threat Modeling, Resilience Modeling, and Health Modeling - because this code needs to exchange money in an environment with very flaky infrastructure, it must have good security and resilience, and it must easily report its health state and automatically attempt to return to a healthy one. To achieve this we employ basic tried-and-true modeling practices.
  • Automated Testing - most testing will be automated to allow for easy regression. See the automated testing strategy.
  • Microservices - Because the architecture needs to easily deploy, scale, and have components be easily replaced or upgraded it will be built as a set of microservices.
  • APIs - In order to avoid confusion from too many changing microservices, we use strongly defined APIs. APIs will be defined using OpenAPI or RAML. Teams document their APIs with Swagger v2.0 or RAML v0.8 so they can automatically test, document, and share their work. Swagger is slightly preferred as there are free tools. Mule will make use of RAML 0.8. Swagger can be automatically converted to RAML v0.8, or manually to RAML v1.0 if additional readability is desired.
  • Services - Microservices are grouped and deployed in a few services such as the DFSP, Central Directory, etc. Each of these will have simple defined interfaces, configuration scripts, tests, and documentation.
  • Database Storage - although Microsoft SQL Server is widely used in Africa, we need a SQL backend that is open source friendly and can scale in a production environment. Thus, we chose PostgreSQL. The database is called through an adapter and the stored procedures are kept in simple ANSI SQL so that it can be replaced later with little trouble.
  • USSD - Smartphones are only 25% of the target market and are not currently supported by most money transfer services, so we need a protocol that will work on simple feature phones. Like M-Pesa, we are using USSD between the phone and the digital financial service provider (DFSP).
  • Operating System - Again, Microsoft Windows is widely used in many target countries, but we need an operating system that is free of license fees and is open source compatible. We are using Linux. We don’t have a dependency on the particular flavor, but are using the basic Amazon Linux. In the Docker containers, Alpine Linux is used.
  • Interledger - The project needed a lightweight, open, and secure transport protocol for funds. Interledger.org provides all that. It also provides the ability to connect to other systems. We also considered block chain systems, but block chain systems send very large messages which will be harder to guarantee delivery of in third world infrastructure. Also, while blockchain systems provide good anonymity, that is not a project goal. To enable fraud detection, regulatory authorities need to be able to request records of transfers by account and person.
  • MuleSoft - For the most part, the Mule server is simply a host and pass-through for Level One Client API calls. However, it will be necessary to deploy the Mojaloop system into existing financial providers. MuleSoft provides an excellent adapter so that the APIs can be easily hooked up to existing systems while providing cross-cutting concerns like logging, fault tolerance, and security. The core pieces used don’t require license fees.
  • NodeJS - NodeJS is designed to create simple microservices and it has a huge set of open source libraries available. Node performance is fine and while Node components don’t scale vertically a great deal, we plan to scale horizontally, which it does fine. The original Interledger code was written in NodeJS as was the level one prototype this code is based on. Most teams used Node already, so this made sense as a language.
  • NodeJS “Standard” - Within NodeJS code, we use Standard as a code style guide and to enforce code style.
  • Java - Mule can’t run NodeJS directly, so some adapters to mule and interop pieces are written in Java. This is a very small part of the overall code.
  • Checkstyle - Within Java code, we use Checkstyle as a code style guide and style enforcement tool.
  • GitHub - GitHub is the standard source control for open source projects so this decision was straightforward.
    We create a story for each piece of integration work and bugs for any issues, and we track all stories throughout the pipeline to keep reporting reliable.
  • Slack - Slack is used for internal team communication. This was largely picked because several teams already used it and liked it as a lightweight approach compared to email.
  • ZenHub - We needed a project management solution that was very lightweight and cloud based to support distributed teams. It had to support epics, stories, and bugs and a basic project board. VS and Jira online offerings were both considered. For a small distributed development team an online service was better. For an open source project, we didn’t want ongoing maintenance costs of a server. Direct and strong GitHub integration was important. It was very useful to track work for each microservice with that microservice. Jira and VS both have more overhead than necessary for a project this size and don’t integrate as cleanly with GitHub as we’d want. ZenHub allowed us to start work immediately. A disadvantage is the lack of support for cumulative flow diagrams and for tracking the number of stories instead of points, so we do these manually with a spreadsheet updated daily and the results published to the “Project Management” Slack channel. (Cumulative flow is being added to ZenHub, but wasn’t available for most of the project.)
  • AWS - We needed a simple hosting service for our Linux instances, and we aren’t going to use many of the extra services like geo-redundancy, since we expect customers may wish to self-host. AWS is an industry standard and works well. We considered Azure, which also would have worked, but it’s harder for the Gates Foundation to get an Azure subscription than AWS.
  • Docker - the project needs to support both local and cloud execution. We have many small microservices that have very simple specific configurations and requirements. The easiest way to guarantee that the service works the same way in every environment from local development, to cloud, to hosted production is to put each microservice in a Docker container along with all the prerequisites it needs to run. The container becomes a secure, closed, pre-configured, runnable unit.
  • CircleCI - to get started quickly we needed an online continuous build and testing system that can work with many small projects and a distributed team. Jenkins was considered, but it requires hosting a server and a lot of configuration. CircleCI allowed for a no host solution that could be started with no cost and very limited configuration. We thought we might start with CircleCI and move off later if we outgrew it, but that hasn’t been needed.
  • Artifactory - After the build we need private repository to put our NodeJS packages and Docker containers until they are formally published. Docker and AWS both do this, and any solution would work. We chose Artifactory from JFrog simply because one team already had an account with it and had it setup.
  • SonarQube - We need an online dashboard of code quality (size, complexity, issues, and coverage) that can aggregate the code from all the repos. We looked at several online services (CodeCov, Coveralls, and Code Climate), but most couldn’t do complexity or even number of lines of code. Code Climate has limited complexity (through ESLint), but costs 6.67/seat/month. SonarQube is free, though it required us to setup and maintain our own server. It gave the P1 features we wanted.
  • DropBox - Intermediate and planning documents need a simple shared set of folders. They don’t need versioning or tracking like GitHub offers. The share should have integration with Slack. We considered SharePoint, Syncplicity, Box, and others. DropBox was already used by most teams and the Gates Foundation already had an account for it, so it was simply the easiest to go with. It’s also the most full-featured choice and integrates well with Slack.
  • Markdown - Documentation is a deliverable for this project, just like the code, and so we want to treat it like the code in terms of versioning, review, check in, and tracking changes. We also want the documentation to be easily viewable online without constantly opening a viewer. GitHub has a built-in format called Markdown which solves this well. The same files work for the Wiki and the documents. They can be reviewed with the check in using the same tools and viewed directly in GitHub. We considered Google Docs, Word and PDF, but these binary formats aren’t easily diff-able. A disadvantage is that markdown only allows simple formatting - no complex tables or font changes - but this should be fine when our main purpose is clarity.
  • Draw.io - We need to create pictures for our documents and architecture diagrams using an (ideally free) open source friendly tool, that is platform agnostic, supports vector and raster formats, allows WYSIWYG drawing, works with markdown, and is easy to use. We looked at many tools including: Visio, Mermaid, PlantUML, Sketchboard.io, LucidChart, Cacoo, Archi, and Google Drawings. Draw.io scored at the top for our needs. It’s free, maintained, easy to use, produces our formats, integrates with DropBox and GitHub, and platform agnostic. In order to save our diagrams, we have to save two copies - one in SVG (scalable vector) format and the other in PNG (raster). We use the PNG format within the docs since it can be viewed directly in GitHub. The SVG is used as the master copy as it is editable.
  • Dactyl - We need to be able to print the online documentation. While it’s possible to print markdown files directly one at a time, we’d like to put the files into set of final PDF documents, where one page might end up in more than one final manual. Dactyl is a maintained open source conversion tool that converts between markdown and PDF. We originally tried Pandoc, but it had bugs with converting tables. Dactyl fixes that and is much more flexible.
  • Ansible - We need a way to set microservice configurations and monitor/verify that those configuration are correct. We need the tool to be very simple to setup and use. It must support many OS’s and work for both the cloud and local environments as well as Docker and non-Docker setup. We looked at many tools including: Chef, Puppet, Cloud Formation, Docker Compose/Swarm, Kubernetes, Terraform, Salt, and Ansible. Chef and Puppet were eliminated because they have a very large learning curve and large setup requirements. AWS Cloud Formation and Docker were both limited to specific environments and we need broader support. Terraform was a good tool, but works differently with each environment, so a configuration for cloud can’t be reused locally. Kubernetes is also good, but is designed to send commands across large scale environments, not configure a few specific microservices. Salt and Ansible can both do the job. Salt is more scalable and performant, as it puts an agent on each server to orchestrate config. Ansible is much simpler, having an agentless direct setup. We went with Ansible because of the simple setup and learning curve. We don’t need the speed and scale Salt provides and accept a slightly lower performance. Ansible allows us to define the expected state of the microservice in a playbook. If the state is incorrect, Ansible can automatically alert and correct the state. In this way it is both a monitoring and configuration tool.

37 About Mojaloop Scenarios

Mojaloop addresses a number of scenarios in which this software might help the poor to meet their financial and banking needs. There are several different paths for each of these scenarios, including potential timeout issues and reversals (which are handled as a separate transaction). The most common paths include:

37.1 Scenario Descriptions

37.1.1 Send money to anyone

Hamim is working on his farm in Northern Uganda when he receives an urgent phone call from his brother, Kani. Kani is low on money and will not get paid until next week. Kani has no money to pay for food for his family and asks Hamim to help him out. Hamim and Kani have no means of transportation, and it would take several days for Hamim to get to Kani’s home in Southern Uganda. While they both have mobile flip phones, they use different financial service providers. Kani tells Hamim that he needs 5,000 shillings to buy food until he gets paid next week for his job working in a local field. Hamim agrees to send Kani the money.

        *** Mojaloop technology does its job ***

Because Hamim has sent money to Kani before, he has his information on his phone. Hamim sees Kani’s name come up on his phone and he starts the transaction. He also sees the total fees and any exchange rates he has to pay before he sends the money. He is happy for that validation and that the transaction goes the same way every time. In under 30 seconds, Hamim is able to send the money to Kani and verify that he got it. Hamim is happy he was able to help out Kani and his family so quickly so they can buy food.

37.1.2 Buy Goods - Pending Transactions

Venya is waiting in line to buy plantains at her local market. She is corralling her elder child with one hand and has her baby in a sling. She often comes to this seller and she knows he has a good price. She also knows that even though she carries no money and he is not on her financial network, she can buy from him. As she approaches the head of the line she juggles the children and pulls out a simple flip phone. She tells him 1.5 kilograms and he tells her the price, which she agrees to.

        *** Mojaloop technology does its job ***

Because she’s been here before, the merchant already has her information on his phone. The only information he has is her user number. This makes Venya feel safe, since the merchant does not have her mobile phone number. The merchant enters the amount for the plantains. Almost instantaneously, Venya sees the merchant’s invoice on her phone and she is glad she is able to pay for the transaction using her mWallet account. She is happy that the transaction goes the same way every time, because half of her attention is on the children. She has friends who can’t read, and they are able to buy things this way too by following the simple order of the transaction. In under 30 seconds, she is able to send the money to the merchant, and both Venya and the merchant get confirmation of the transaction. She tells the elder child to pick up her plantains and makes room for the next person in line.

37.1.3 Bulk Payments

Nikisha is the accountant for one of the largest manufacturing companies in Johannesburg, which employs over 250 workers. The company uses a time and attendance system to track the number of hours that each employee works along with their hourly rate and employee ID. On a weekly basis Nikisha gets an update on her bulk payment system that shows all the employees, their user IDs, and the amounts to be paid. Since the company’s employees all have different financial service providers, this system makes it really easy for Nikisha to confirm and distribute their pay with a couple of clicks. The company has a high turnover rate, so new employees who get their first paycheck are automatically prompted to open an account as long as they provided a valid mobile number when they started. As Nikisha gets ready to send out this week’s payments, she opens up a bulk payment report.

*** Mojaloop technology does its job ***

The bulk report for payments comes up by date range and, since Nikisha does this weekly, there are several items she needs to verify each time. Specifically, she looks for any errors or alerts for employees with invalid phone numbers, names not matching or other anomalies. Nikisha has the ability to follow-up with her co-workers or employees directly to fix these errors before sending out the payments. In addition, Nikisha is also notified of any employees who don’t have an account setup. For these users, Nikisha is still able to push a payment through and the employee will be prompted by text message to open an account. Nikisha is thankful she has this process that makes it much easier to distribute funds. Once Nikisha has completed her validation, she sends it to her supervisor for final approval. Nikisha is glad to have this system in place because several years ago, Nikisha and her supervisor had to pay employees in cash and use a manual system to verify payments were received which made her feel very uneasy.

37.1.4 Tiers/Risk Levels

Salem works as an auditor for a large bank in Kampala, Uganda. His job is to monitor, manage and reduce risk for the company. In order to do so, each new user ID in the system is assigned a default tier level, which means they can only transfer a small number and amount of funds in and out of the system over specific periods of time. As users acquire greater balances in their accounts and hold their accounts for longer periods of time, their tier levels improve. Tier levels are handled automatically by the system, so Salem does not need to worry about assigning or changing these levels under normal circumstances. Part of Salem’s job as an auditor is to review the daily reports to ensure that everyone’s funds are safe and secure, and he kicks off his daily report to review the accounts.

*** Mojaloop technology does its job ***

This morning when Salem reviews these reports, he notices that one specific user ID has 32 outgoing transactions in one day, which exceeds their daily count of 25. This seems very suspicious to Salem and he goes ahead and contacts the customer. It turns out that this customer is a local merchant that owns a store. The merchant explains that he has to go to the market on a weekly basis to get ingredients for his restaurant, and it is not uncommon for him and his staff to make more than 25 purchases in one day. Although this customer has only been with Salem’s bank for a month, they have a healthy balance in their account. Salem goes ahead and upgrades the customer’s tier level to increase the daily and weekly transaction counts.

37.1.5 Fraud Checks and Blacklists

Salem works as an auditor for a large bank in Kampala, Uganda. His job is to monitor and stop any fraudulent activity for the company. While the company has a set of rules that might flag individuals for Salem to investigate, he also has the authority to screen any user ID for fraudulent activities at any time. Each time Salem performs a fraud check on a user ID, the system records the date of the last check along with any comments that Salem might have made. This makes it very easy for Salem to search for IDs that might have never been checked for fraud or have not been checked in a very long time. Salem has been monitoring one particular ID that seems to have had an increased amount of incoming funds deposited into their account on a daily basis. Today he does a search of the user ID to investigate further.

*** Mojaloop technology does its job ***

When the user ID is retrieved, Salem is able to see the customer’s name, birthdate and national ID number. He also sees any additional IDs and the account type associated with this customer. Upon further inspection Salem sees that, once again, the number and amount of transactions deposited into this account has doubled today. Salem suspects that this user is involved in some illegal activity and would like to send this up to his supervisor to get someone to do a deeper investigation. In the meantime, to ensure that the illegal funds don’t continue to come into the system, Salem decides to ‘freeze’ the account. Salem does this by checking the blacklist button and indicating a reason for the blacklist. At this point any future deposits or withdrawals for this user ID will be denied until someone from the bank removes them from the blacklist. Salem feels good that no additional funds that might have come from illegal or unapproved sources will be deposited into this customer’s account.

37.1.6 Account Management

Tadeo just bought his first mobile flip phone for him and his family to share. He is happy that he finally has a phone that he can use in emergencies, but he can also finally keep his money secure by opening a bank account. Tadeo has never had a bank account, since he lives in a very remote part of Africa with no personal means of transportation. Tadeo and his family have to rely on bartering or cash to buy any goods they need at the market. Although Tadeo is not proficient in reading, he is able to easily use his phone to set up an account for him and his family by following a couple of easy-to-read menu steps.

*** Mojaloop technology does its job ***

Tadeo was able to use his phone to create an mWallet account using his National ID. He was also asked to create a unique PIN, which made him feel secure in case he or someone in the family lost the phone. Tadeo is the primary account owner and he was able to easily create a new account for his oldest son. He was very pleased that he could have separate accounts for his son. His son is married and lives with Tadeo but does not have a phone. Since his son works, it is only fair that he and his wife should be able to spend his money on the goods and foods they prefer. Tadeo also adds his wife as a user on his account. He makes his wife a signatory, since she does most of the shopping at the local market, and she now has the ability to pay for goods using this phone. Tadeo is very happy that his wife no longer needs to carry cash or barter goods to the market.

37.1.7 Check Account and POS

Jahari has a flip phone that all the family uses, and he has set up different user numbers for each family member. Jahari is at the local market and needs to buy some meat for his family. Before he does, he wants to make sure he has enough funds in his account to purchase the goods and still have enough left over to set aside for future medical expenses and education. Jahari is happy that his money is secure and that he is able to check his account balance anytime he needs to by simply entering his secure PIN on his phone. Once he confirms his balance he will buy some goat and cow meat at the market.

    *** Mojaloop technology does its job ***

After Jahari has entered his PIN on his phone, he is able to see his account balance. He is also able to see any of his recent transactions as well as any fees that were associated with these transactions. After confirming his available funds, he picks out his meat and brings it up to the merchant for payment. The merchant does a lot of business in this market and has a point-of-sale (POS) device. This is very helpful for Jahari and his family, since they only have one phone and many times his wife or his children go to the market without it. The merchant is able to enter the purchase amount on the POS device, and Jahari or any of his family members can securely enter their user number and review the transaction. Jahari confirms that the amount on the POS machine matches what the merchant verbally told him and he enters his PIN to approve the transaction.

*** Mojaloop technology does its job ***

The merchant gets confirmation that he received payment and he prints a receipt for Jahari. Since Jahari has his phone with him today he also re-checks his account balance again to confirm that appropriate funds were taken from his account. Jahari is happy that this is an easy process and he can see that he has plenty of money left to set aside this month for his family to use on education or health expenses.

38 Terminology

These are the preferred terms and definitions for Mojaloop.

38.1 Details

Term Alternative and Related Terms Mojaloop Definition
Access Point POS (“Point of Sale”), Customer Access Point, ATM, Branch Places or capabilities that are used to initiate or receive a payment. Access points can include bank branch offices, ATMs, terminals at the POS, agent outlets, mobile phones, and computers.
Account Lookup System Account Lookup System is an abstract entity used for retrieving information regarding in which FSP an account, wallet or identity is hosted. The Account Lookup System itself can be hosted in its own server, as part of a financial switch, or in FSPs.
Account Number Account ID A unique number representing an account. There can be multiple accounts for each end user.
Active User A term used by many providers in describing how many of their account holders are frequent users of their service.
Addressing Directories, Aliasing The use of necessary information (account number, phone number, etc.) for a paying user to direct payment to a receiving user.
Agent Agent till, Agent outlet An entity authorized by the provider to handle various functions such as customer enrollment, cash-in and cash-out using an agent till.
Agent Outlet Access point A physical location that carries one or more agent tills, enabling it to perform enrollment, cash-in and cash-out transactions for customers on behalf of one or more providers. National law defines whether an agent outlet may remain exclusive to one provider. Agent outlets may have other businesses and support functions.
Agent Till Registered agent An agent till is a provider-issued registered “line”, either a special SIM card or a POS machine, used to perform enrollment, cash-in and cash-out transactions for clients. National law dictates which financial service providers can issue agent tills.
Aggregator Merchant Aggregator A specialized form of a merchant services provider who typically handles payments transactions for a large number of small merchants. Scheme rules often specify what aggregators are allowed to do.
Anti Money Laundering AML; also “Combating the Financing of Terrorism”, or CFT Initiatives to detect and stop the use of financial systems to disguise use of funds criminally obtained.
Application Program Interface API A software program that makes it possible for application programs to interact with each other and share data.
Arbitration The use of an arbitrator, rather than courts, to resolve disputes.
Authentication Verification, Validation The process of ensuring that a person or a transaction is valid for the process (account opening, transaction initiation, etc.) being performed.
Authorization A process used during a “pull” payment (such as a card payment), when the payee requests (through their provider) confirmation from the payer’s bank that the transaction is good.
Automated Clearing House An electronic clearing system in which payment orders are exchanged among payment service providers, primarily via magnetic media or telecommunications networks, and then cleared amongst the participants. All operations are handled by a data processing center. An ACH typically clears credit transfers and debit transfers, and in some cases also cheques.
Bank Savings Bank, Credit Union, Payments Bank A chartered financial institution within a country that has the ability to accept deposits and make and receive payments into those accounts.
Bank Accounts and Transaction Services Mobile Banking, Remote Banking, Digital Banking A transaction account held at a bank. This account may be accessible by a mobile phone, in which case it is sometimes referred to as “mobile banking”.
Bank-Led Model Bank-Centric Model A reference to a system in which banks are the primary providers of digital financial services to end users. National law may require this.
Basic Phone Minimum device required for DFS
Bilateral Net Settlement System A settlement system in which participants’ bilateral net settlement positions are settled between every bilateral combination of participants.
Bilateral Netting An arrangement between two parties to net their bilateral obligations. The obligations covered by the arrangement may arise from financial contracts, transfers or both.
Bill Payment C2B, Utility payments, school payments Making a payment for a recurring service, either in person (“face to face”) or remotely.
Biometric Authentication The use of a physical characteristic of a person (fingerprint, iris, etc.) to authenticate that person.
Blacklist A list or register of entities (registered users or, where granularity allows, user accounts) that are denied or blocked from a particular privilege, service, mobility, access or recognition. Entities on the list will NOT be accepted, approved and/or recognized; blacklisting is the practice of identifying entities that are denied, unrecognized, or ostracized. The services concerned may be informational (e.g. balance check), transactional (e.g. debit/credit payment) services, or lifecycle (e.g. registration, closure) services.
Blockchain Digital currency, cryptocurrency, distributed ledger technology The technology underlying bitcoin and other cryptocurrencies: a shared digital ledger, or a continually updated list of all transactions.
Borrowing Borrowing money to finance a short-term or long-term need.
Bulk Payer An organization (or rarely, an individual), that needs to pay to many users at once.
Bulk Payments G2C, B2C, G2P, social transfers Making and receiving payments from a government to a consumer: benefits, cash transfers, salaries, pensions, etc.
Bulk Payments Services A service that allows a government agency or an enterprise to make payments to a large number of payees - typically consumers but can be businesses as well.
Bulk upload service A service enabling the import of multiple transactions per session, most often via a bulk data transfer file which is used to initiate payments. Example: salary payment file.
Bundling Packaging, Tying A business model in which a provider which groups a collection of services into one product which an end user agrees to buy or use.
Business Entity such as a public limited or limited company or corporation that uses mobile money as a service, e.g. taking bill payments, making bill payments and disbursing salaries
Cash Management Agent Liquidity Management Management of cash balances at an agent.
Cash-In Receiving eMoney credit in exchange for physical cash - typically done at an agent.
Cash-Out Receiving physical cash in exchange for a debit to an eMoney account - typically done at an agent.
Chip Card EMV Chip Card, Contactless Chip Card A chip card contains a computer chip: it may be either contactless or contact (requires insertion into terminal). Global standards for chip cards are set by EMV.
CICO Cash In Cash Out
Clearing The process of transmitting, reconciling, and, in some cases, confirming transactions prior to settlement, potentially including the netting of transactions and the establishment of final positions for settlement. Sometimes this term is also used (imprecisely) to cover settlement. For the clearing of futures and options, this term also refers to the daily balancing of profits and losses and the daily calculation of collateral requirements.
Clearing House A central location or central processing mechanism through which financial institutions agree to exchange payment instructions or other financial obligations (e.g. securities). The institutions settle for items exchanged at a designated time based on the rules and procedures of the clearinghouse. In some cases, the clearinghouse may assume significant counterparty, financial, or risk management responsibilities for the clearing system.
Closed-Loop A payment system used by a single provider, or a very tightly constrained group of providers.
Combating Terrorist Financing CFT (Counter Financing of Terrorism) Initiatives to detect and stop the use of financial systems to transfer funds to terrorist organizations or people.
Commission An incentive payment made, typically to an agent or other intermediary who acts on behalf of a DFS provider.
Commit Commit means that the electronic funds that were earlier reserved are now moved to the final state of the financial transaction. The financial transaction is completed. The electronic funds are no longer locked for usage.
Counterparty Payee, payer, borrower, lender The other side of a payment or credit transaction. A payee is the counterparty to a payer, and vice-versa.
Coupon A token that entitles the holder to a discount or that may be exchanged for goods or services
Credit History Credit bureaus, credit files A set of records kept for an end user reflecting their use of credit, including borrowing and repayment.
Credit Risk Management Tools to manage the risk that a borrower or counterparty will fail to meet its obligations in accordance with agreed terms.
Credit Scoring A process which creates a numerical score reflecting credit worthiness.
Cross Border Trade Finance Services Services which enable one business to sell or buy to businesses or individuals in other countries; may include management of payments transactions, data handling, and financing.
Cross-FX Transfer A transfer involving multiple currencies, including a foreign exchange calculation.
Customer Database Management The practices providers use to manage customer data; these may be supported by the payment platform the provider uses.
Data Protection PCI-DSS The practices enterprises use to protect end user data. “PCI-DSS” is a card industry standard for this.
Deposit Guarantee System Deposit Insurance A fund that insures the deposits of account holders at a provider; often a government function used specifically for bank accounts.
DFSP On-boarding On-boarding a DFSP is the process of adding a new DFSP to this financial network.
Digital Financial Service Provider (DFSP) The regulated entity providing digital financial services to users. Manages the wallet for the users. May manage other types of digital assets such as savings accounts, loans etc. Depending on countries and regulations, a DFSP can be a bank, a telco, a Mobile Money Operator or some other private entity.
Digital Financial Services Mobile Financial Services Digital financial services include methods to electronically store and transfer funds; to make and receive payments; to borrow, save, insure and invest; and to manage a person’s or enterprise’s finances.
Digital Liquidity A state in which a consumer is willing to leave funds (eMoney or bank deposits) in electronic form, rather than performing a “cash-out”.
Digital Payment Mobile Payment, Electronic Funds Transfer A broad term including any payment which is executed electronically. Includes payments which are initiated by mobile phone or computer. Card payments in some circumstances are considered to be digital payments. The term “mobile payment” is equally broad, and includes a wide variety of transaction types which in some way use a mobile phone.
Dispute Resolution A process specified by a provider or by the rules of a payment scheme to resolve issues between end users and providers, or between an end user and its counterparty.
Domestic Remittance P2P; Remote Domestic Transfer of Value Making and receiving payments to another person in the same country.
Electronic Invoicing, ERP, Digital Accounting, Supply Chain Solutions Services, Business Intelligence Services that support merchant or business functions relating to DFS services.
eMoney eFloat, Float, Mobile Money, Electronic Money, Prepaid Cards A record of funds or value available to a consumer stored on a payment device such as chip, prepaid cards, mobile phones or on computer systems as a non-traditional account with a banking or non-banking entity.
eMoney Accounts and Transaction Services Digital Wallet, Mobile Wallet, Mobile Money Account A transaction account held at a non-bank. The value in such an account is referred to as eMoney.
eMoney Issuer Issuer, Provider A provider (bank or non-bank) who deposits eMoney into an account they establish for an end user. eMoney can be created when the provider receives cash (“cash-in”) from the end user (typically at an agent location) or when the provider receives a digital payment from another provider.
Encryption Decryption The process of encoding a message so that it can be read only by the sender and the intended recipient.
End User Consumer, Customer, Merchant, Biller The customer of a digital financial services provider: the customer may be a consumer, a merchant, a government, or another form of enterprise.
Escrow Funds Isolation, Funds Safeguarding, Custodian Account, Trust Account. A means of holding funds for the benefit of another party. eMoney Issuers are usually required by law to hold the value of end users’ eMoney accounts at a bank, typically in a Trust Account. This accomplishes the goals of funds isolation and funds safeguarding.
External Account An account hosted outside the FSP, typically accessible through an external provider interface (API).
FATF The Financial Action Task Force is an intergovernmental organization to combat money laundering and to act on terrorism financing.
Feature Phone A mobile telephone without significant computational capabilities.
Fees The payments assessed by a provider to their end user. This may either be a fixed fee, a percent-of-value fee, or a mixture. A Merchant Discount Fee is a fee charged by a Merchant Services Provider to a merchant for payments acceptance. Payments systems or schemes, as well as processors, also charge fees to their customer (typically the provider).
Financial Inclusion The sustainable provision of affordable digital financial services that bring the poor into the formal economy.
Financial Literacy Consumers and businesses having essential financial skills, such as preparing a family budget or an understanding of concepts such as the time value of money, the use of a DFS product or service, or the ability to apply for such a service.
FinTech A term that refers to the companies providing software, services, and products for digital financial services: often used in reference to newer technologies.
Float This term can mean a variety of different things. In banking, float is created when one party’s account is debited or credited at a different time than the counterparty to the transaction. eMoney, as an obligation of a non-bank provider, is sometimes referred to as float.
Fraud Fraud Management, Fraud Detection, Fraud Prevention Criminal use of digital financial services to take funds from another individual or business, or to damage that party in some other way.
Fraud Risk Management Also known as fraud risk management service (FRMS) Tools to manage providers’ risks, and at times users’ risks (e.g. for merchants or governments), in providing and/or using DFS services.
FX Foreign Exchange.
Government Payments Acceptance Services Services which enable governments to collect taxes and fees from individuals and businesses.
HCE Host Card Emulation A communication technology that enables payment data to be safely stored without using the Secure Element in the phone.
Identity National Identity, Financial Identity, Digital Identity A credential of some sort that identifies an end user. National identities are issued by national governments. In some countries a financial identity is issued by financial service providers.
Immediate Funds Transfer Real Time A digital payment which is received by the payee almost immediately upon the payer having initiated the transaction.
Insurance Products A variety of products which allow end user to insure assets or lives that they wish to protect.
Insuring Lives or assets Paying to protect the value of a life or an asset.
Interchange Swipe Fee, Merchant Discount Fee A structure within some payments schemes which requires one provider to pay the other provider a fee on certain transactions. Typically used in card schemes to effect payment of a fee from a merchant to a consumer’s card issuing bank.
International Remittance P2P; Remote Cross-border Transfer of Value, Cross-Border Remittance Making and receiving payments to another person in another country.
Interoperability Interconnectivity When payment systems are interoperable, they allow two or more proprietary platforms or even different products to interact seamlessly. The result is the ability to exchange payments transactions between and among providers. This can be done by providers participating in a scheme, or by a variety of bilateral or multilateral arrangements. Both technical and business rules issues need to be resolved for interoperability to work.
Interoperability settlement bank Entity that facilitates the exchange of funds between the FSPs. The settlement bank is one of the main entities involved in any inter-FSP transactions.
Investment Products A variety of products which allow end users to put funds into investments other than a savings account.
Irrevocable Non-Repudiation A transaction that cannot be “called back” by the payer; an irrevocable payment, once received by a payee, cannot be taken back by the payer.
Interoperability Service for Transfer IST An inter-system trunk that allows for routing of payments.
Know Your Customer KYC, Agent and Customer Due Diligence, Tiered KYC, Zero Tier The process of identifying a new customer at the time of account opening, in compliance with law and regulation. The identification requirements may be lower for low value accounts (“Tiered KYC”). The term is also used in connection with regulatory requirements for a provider to understand, on an ongoing basis, who their customer is and how they are using their account.
L1P Bulk Payment Facilitator An organization that processes L1P compliant payments and resulting reports on behalf of Bulk Payers.
Liability Agent Liability, Issuer Liability, Acquirer Liability A legal obligation of one party to another; required by either national law, payment scheme rules, or specific agreements by providers. Some scheme rules transfer liabilities for a transaction from one provider to another under certain conditions.
Liquidity Agent liquidity The availability of liquid assets to support an obligation. Banks and non-bank providers need liquidity to meet their obligations. Agents need liquidity to meet cash-out transactions by consumers and small merchants.
Loans Microfinance, P2P Lending, Factoring, Cash Advances, Credit, Overdraft, Facility Means by which end users can borrow money.
M2C Merchant to Customer or Consumer.
mCommerce eCommerce Refers to buying or selling in a remote fashion: by phone or tablet (mCommerce) or by computer (eCommerce)
Merchant Payments Acceptor An enterprise which sells goods or services and receives payments for such goods or services.
Merchant Acquisition Onboarding The process of enabling a merchant for the receipt of electronic payments.
Merchant payment - POS C2B, Proximity Payments Making a payment for a good or service in person (“face to face”); includes kiosks and vending machines.
Merchant payment - Remote C2B, eCommerce Payment, Mobile Payment Making a payment for a good or service remotely; transacting by phone, computer, etc.
Merchant Payments Acceptance Services Acquiring services A service which enables a merchant or other payment acceptor to accept one or more types of electronic payments. The term “acquiring” is typically used in the card payments systems.
Merchant Service Provider Acquirer A provider (bank or non-bank) who supports merchants or other payments acceptors requirements to receive payments from customers. The term “acquirer” is used specifically in connection with acceptance of card payments transactions.
MFSP Platform A Mobile Financial Service Provider’s platform.
Mobile Network Operator An enterprise which sells mobile phone services, including voice and data communication.
Money Transfer Operator A specialized provider of DFS who handles domestic and/or international remittances.
Multilateral Net Settlement Position The sum of the value of all the transfers a participant in a net settlement system has received during a certain period of time less the value of the transfers made by the participant to all other participants. If the sum is positive, the participant is in a multilateral net credit position; if the sum is negative, the participant is in a multilateral net debit position.
Multilateral Net Settlement System A settlement system in which each settling participant settles (typically by means of a single payment or receipt) the multilateral net settlement position which results from the transfers made and received by it, for its own account and on behalf of its customers or non-settling participants for which it is acting.
Multilateral Netting Netting on a multilateral basis is arithmetically achieved by summing each participant’s bilateral net positions with the other participants to arrive at a multilateral net position. Such netting is conducted through a central counterparty (such as a clearing house) that is legally substituted as the buyer to every seller and the seller to every buyer. The multilateral net position represents the bilateral net position between each participant and the central counterparty. A worked sketch of computing net positions appears after this glossary.
NDFSP A national digital financial service provider.
Near Field Communication NFC A communication technology used within payments to transmit payment data from an NFC equipped mobile phone to a capable terminal.
Netting The offsetting of obligations between or among participants in the settlement arrangement, thereby reducing the number and value of payments or deliveries needed to settle a set of transactions.
Non Bank-Led Model MNO-Led Model A reference to a system in which non-banks are the providers of digital financial services to end users. Non-banks typically need to meet criteria established by national law and enforced by regulators.
Non-Bank Payments Institution, Alternative Lender An entity that is not a chartered bank, but which is providing financial services to end users. The requirements of non-banks to do this, and the limitations of what they can do, are specified by national law.
Nostro Account From the Payer’s perspective: Payer FSP funds/accounts held/hosted at Payee FSP
Notification Notice to payer or payee regarding the status of a transfer.
Off-Us Payments Off-net payments Payments made in a multiple-participant system or scheme, where the payer’s provider is a different entity from the payee’s provider.
On-Us Payments On-net payments Payments made in a multiple-participant system or scheme, where the payer’s provider is the same entity as the payee’s provider.
Open-Loop A payment system or scheme designed for multiple providers to participate in. Payment system rules or national law may restrict participation to certain classes of providers.
Operations Risk Management Tools to manage providers’ risks in operating a DFS system.
Organization Non-business An entity such as a business, charity or government department that uses mobile money as a service, e.g. taking bill payments, making bill payments and disbursing salaries
Over The Counter Services OTC, Mobile to Cash Services provided by agents when one end party does not have an eMoney account: the (remote) payer may pay the eMoney to the agent’s account, and the agent then pays cash to the non-account-holding payee.
Participant A provider who is a member of a payment scheme, and subject to that scheme’s rules.
Partner Bank Financial institution supporting the FSP and giving it access to the local banking ecosystem.
Payee Receiver The recipient of funds in a payment transaction.
Payee FSP The payee’s financial service provider.
Payer Sender The payer of funds in a payment transaction.
Payer FSP The payer’s financial service provider.
Paying for Purchases C2B - Consumer to Business Making payments from a consumer to a business: the business is the “payment acceptor” or merchant.
Payment System Payment Network, Money Transfer System Encompasses all payment-related activities, processes, mechanisms, infrastructure, institutions and users in a country or a broader region (e.g. a common economic area).
Payment System Operator Mobile Money Operator, Payment Service Provider The entity that operates a payment system or scheme.
Peer FSP Mobile Money Platform The counterparty financial service provider (for example, the counterparty Mobile Money Provider’s platform).
PEP Politically Exposed Person. Someone who has been entrusted with a prominent public function. A PEP generally presents a higher risk for potential involvement in bribery and corruption by virtue of their position and the influence that they may hold (e.g. ‘senior foreign political figure’, ‘senior political figure’, ‘foreign official’, etc.).
Phone Number A non-identifying number associated with one or more end users as contact information for the end user. These numbers use the E.164 standard. Phone numbers are not required as user numbers, though they can be used that way if a government or DFSP insists.
Platform Payment Platform, Payment Platform Provider A term used to describe the software or service used by a provider, a scheme, or a switch to manage end user accounts and to send and receive payment transactions.
Point of Sale Device Terminal, Acceptance Device, POS, mPOS Any device meant specifically for managing the receipt of electronic payments.
Posting Clearing The act by the provider of entering a debit or credit entry into the end user’s account record.
Prefunding The process of adding funds to Vostro/Nostro accounts.
Prepaid Cards An eMoney product for general-purpose use, where the record of funds is stored on the payment card (on the magnetic stripe or the embedded integrated circuit chip) or on a central computer system, and which can be drawn down through specific payment instructions issued from the bearer’s payment card.
Processor Gateway An enterprise that manages, on an out-sourced basis, various functions for a digital financial services provider. These functions may include transaction management, customer database management, and risk management. Processors may also perform functions on behalf of payments systems, schemes, or switches.
Promotion FSP marketing initiative offering the user a transaction/service fee discount on goods or services. May be implemented through the use of a coupon.
Provider Financial Service Provider, Payment Service Provider, Digital Financial Services Provider The entity that provides a digital financial service to an end user (either a consumer, a business, or a government.) In a closed-loop payment system, the Payment System Operator is also the provider. In an open-loop payment system, the providers are the banks or non-banks which participate in that system.
Pull Payments A payment type which is initiated by the payee: typically a merchant or payment acceptor, whose provider “pulls” the funds out of the payer’s account at the payer’s provider.
Push Payments A payment type which is initiated by the payer, who instructs their provider to debit their account and “push” the funds to the receiving payee at the payee’s provider.
Quoting The process a DFSP uses to ask for the fees and ILP packet from the destination
Reconciliation Cross FSP Reconciliation is the process of ensuring that two sets of records, usually the balances of two accounts, are in agreement between FSPs. Reconciliation is used to ensure that the money leaving an account matches the actual money transferred. This is done by making sure the balances match at the end of a particular accounting period.
Recourse Rights given to an end user by law, private operating rules, or specific agreements by providers, allowing end users the ability to do certain things (sometimes revoking a transaction) in certain circumstances.
Refund A repayment of a sum of money.
Registration Enrollment, Agent Registration The process of opening a provider account. Separate processes are used for consumers, merchants, agents, etc.
Regulator A governmental organization given power through national law to set and enforce standards and practices. Central Banks, Finance and Treasury Departments, Telecommunications Regulators, and Consumer Protection Authorities are all regulators involved in digital financial services.
Reservation Part of a two-phase transfer operation in which the funds to be transferred are ‘segregated’ (i.e. made unusable to any other transfer attempts) for a predetermined duration, commonly governed by a timeout period. A minimal sketch of the reserve/commit/rollback lifecycle appears after this glossary.
Reversal The process of reversing a completed transfer.
Risk Management Fraud Management The practices that enterprises do to understand, detect, prevent, and manage various types of risks. Risk management occurs at providers, at payments systems and schemes, at processors, and at many merchants or payments acceptors.
Risk-based Approach A regulatory and/or business management approach that creates different levels of obligation based on the risk of the underlying transaction or customer.
Rollback The process of reversing a completed transfer.
RTGS Real-time gross settlement.
Rules The private operating rules of a payments scheme, which bind the direct participants (either providers, in an open-loop system, or end users, in a closed-loop system).
Saving and Investing Keeping funds for future needs and financial return
Savings Products An account at either a bank or non-bank provider, which stores funds with the design of helping end users save money.
Scheme A set of rules, practices and standards necessary for the functioning of payment services.
Secure Element A secure chip on a phone that can be used to store payment data.
Security Level The security specification of the system, which defines the effectiveness of risk protection.
Sending or Receiving Funds Making and receiving payments to another person
Settlement An act that discharges obligations in respect of funds or securities transfers between two or more parties.
Settlement System Net Settlement, Gross Settlement, RTGS A system used to facilitate the settlement of transfers of funds, assets or financial instruments. Net settlement system: a funds or securities transfer system which settles net settlement positions during one or more discrete periods, usually at pre-specified times in the course of the business day. Gross settlement system: a transfer system in which transfer orders are settled one by one.
Short Message Service A service for sending short messages between mobile phones.
SIM Card SIM ToolKit, Thin SIM A smart card inside a cellular phone, carrying an identification number unique to the owner, storing personal data, and preventing operation if removed. A SIM Tool Kit is a standard of the GSM system which enables various value-added services. A “Thin SIM” is an additional SIM card put in a mobile phone.
Smart Phone A device that combines a mobile phone with a computer.
Standards Body EMV, ISO, ITU, ANSI, GSMA An organization that creates standards used by providers, payments schemes, and payments systems.
Storing Funds Account, Wallet Keeping funds in secure electronic format. May be a bank account or an eMoney account.
Super Agent Master agent In some countries, agents are managed by Super Agents or Master Agents who are responsible for the actions of their agents to the provider.
Supplier Payment B2B - Business to Business, B2G - Business to Government Making a payment from one business to another for supplies, etc: may be in-person or remote, domestic or cross border. Includes cross-border trade.
SVA (Stored Value Account) Accounts in which funds are kept in a secure, electronic format.
Switch An entity which receives transactions from one provider and routes those transactions on to another provider. A switch may be owned or hired by a scheme, or be hired by individual providers. A switch will connect to a settlement system for inter-participant settlement.
Systemic Risk In payments systems, the risk of collapse of an entire financial system or entire market, as opposed to risk associated with any one individual provider or end user.
Tax Payment C2G, B2G Making a payment from a consumer to a government, for taxes, fees, etc.
Tokenization The use of a substitute token (a “dummy number”) in lieu of a “real” number, to protect against the theft and misuse of the “real” numbers. Requires a capability to map the token to the “real” number.
Trading International Trade The exchange of capital, goods, and services across international borders or territories
Transaction Accounts Transaction account is broadly defined as an account held with a bank or other authorized and/or regulated service provider (including a non-bank) which can be used to make and receive payments. Transaction accounts can be further differentiated into deposit transaction accounts and eMoney accounts. Deposit transaction account is a deposit account held with banks and other authorized deposit-taking financial institutions that can be used for making and receiving payments. Such accounts are known in some countries as current accounts, chequing accounts or other similar terms.
Transaction Cost The cost to a DFS provider of delivering a digital financial service. This could be for a bundle of services (e.g. a “wallet”) or for individual transactions.
Transfer A general term for sending money. Local transfer refers to sending money within a ledger or DFSP, interledger transfers go between ledgers or DFSPs.
Trusted Execution Environment An execution environment that has security capabilities and meets certain security-related requirements.
Ubiquity The ability of a payer to reach any (or most) payees in their country, regardless of the provider affiliation of the receiving payee. Requires some type of interoperability.
Unbanked Underbanked, Underserved Unbanked people do not have a transaction account. Underbanked people may have a transaction account but do not actively use it. Underserved is a broad term referring to people who are the targets of financial inclusion initiatives. It is also sometimes used to refer to a person who has a transaction account but does not have additional DFS services.
User Creation A process for creating an individual user in the system.
User Number User ID A number that identifies an end user. We assume it uses E.164 format (a small format check appears after this glossary). Depending on the country and DFSP, this can be a phone number, some form of DFSP-provided ID or some form of national ID. The DFSP associates the number with a phone number and a primary account number. Money is generally sent to the user number, and not directly to the account.
User On-boarding A process for creating an individual user in the system and all additional related actions such as creation of user PIN, creation of user account, KYC data capture (photo, fingerprints), etc.
USSD A communication technology that is used to send text between a mobile phone and an application program in the network.
Voucher A token that entitles the holder to a discount or that may be exchanged for goods or services.
Wallet Repository of funds for an account.
Whitelist A list or register of entities (registered users or, where granularity allows, user accounts) that are granted a particular privilege, service, mobility, access or recognition, especially those that were initially blacklisted. Entities on the list will be accepted, approved and/or recognized. Whitelisting is the reverse of blacklisting, the practice of identifying entities that are denied, unrecognized, or ostracized. The services concerned may be informational (e.g. balance check), transactional (e.g. debit/credit) payment services, or lifecycle (e.g. registration, closure) services.
‘x’-initiated Used when referring to the side that initiated a transaction, e.g. agent-initiated cash-out vs. user-initiated cash-out
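The netting entries above (Bilateral Netting, Multilateral Netting, Multilateral Net Settlement Position) describe a simple arithmetic: a participant's multilateral net position is the sum of what it received minus the sum of what it sent during a settlement window. The following is a minimal worked sketch in TypeScript, not Mojaloop code; the Transfer shape and netPositions function are illustrative names only.

// Illustrative sketch of multilateral netting; not part of the Mojaloop API.
interface Transfer {
  from: string;   // sending participant
  to: string;     // receiving participant
  amount: number; // minor currency units, to avoid floating-point error
}

function netPositions(transfers: Transfer[]): Map<string, number> {
  const positions = new Map<string, number>();
  for (const t of transfers) {
    positions.set(t.from, (positions.get(t.from) ?? 0) - t.amount); // debit sender
    positions.set(t.to, (positions.get(t.to) ?? 0) + t.amount);     // credit receiver
  }
  return positions;
}

// dfsp1 sends 100 to dfsp2 and receives 30 back: instead of two gross
// settlements, dfsp1 settles one net debit of 70 and dfsp2 one net credit of 70.
console.log(netPositions([
  { from: "dfsp1", to: "dfsp2", amount: 100 },
  { from: "dfsp2", to: "dfsp1", amount: 30 },
]));

A positive result is a multilateral net credit position and a negative result a multilateral net debit position, matching the definitions above.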
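The Reservation, Commit, Rollback and Reversal entries describe a two-phase transfer lifecycle. The sketch below shows the state transitions under the assumptions those entries state: reserved funds are locked, then either committed before the reservation times out or released again by a rollback. The class and member names are assumptions for illustration, not the Mojaloop API.

// Illustrative two-phase transfer state machine; names are assumptions.
type TransferState = "RESERVED" | "COMMITTED" | "ROLLED_BACK";

class TwoPhaseTransfer {
  state: TransferState = "RESERVED"; // the reservation locks the funds
  constructor(readonly amount: number, readonly expiresAt: Date) {}

  commit(now: Date = new Date()): void {
    if (this.state !== "RESERVED") throw new Error("not in RESERVED state");
    if (now.getTime() > this.expiresAt.getTime()) {
      this.state = "ROLLED_BACK"; // timeout: release the locked funds
      throw new Error("reservation expired");
    }
    this.state = "COMMITTED"; // funds move to their final state
  }

  rollback(): void {
    if (this.state !== "RESERVED") throw new Error("not in RESERVED state");
    this.state = "ROLLED_BACK"; // locked funds become usable again
  }
}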
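Several entries (Phone Number, User Number) assume the E.164 numbering standard. As a small illustration (a validation sketch assumed for this glossary, not a stated Mojaloop requirement), an E.164 number is a leading ‘+’, a non-zero first digit, and at most fifteen digits in total:

// Minimal E.164 format check; illustrative only.
const E164 = /^\+[1-9]\d{1,14}$/;

function isValidUserNumber(n: string): boolean {
  return E164.test(n);
}

console.log(isValidUserNumber("+256772123456")); // true
console.log(isValidUserNumber("0772123456"));    // false: no leading '+' or country code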