Rec. ITU-T P.1502 (01/2020) - Methodology for QoE testing of digital financial services

Summary
History
FOREWORD
Table of Contents
Introduction
1  Scope
2  References
3  Definitions
4  Abbreviations and acronyms
5  Conventions
6  Test scenario under consideration
   6.1  Roles and entities
   6.2  Action flows
   6.3  Test parameterization and neutral starting state
   6.4  Re-initialization after unsuccessful transactions
   6.5  Disappeared money
   6.6  Automation of tests
7  Transaction model
   7.1  Person to person (P2P) mobile money (MoMo) transfer
        7.1.1  Transaction description
        7.1.2  Event and action flow
               7.1.2.1  Involvement of the mobile network in the MoMo process
        7.1.3  Phase definition
               7.1.3.1  Top-level phases
        7.1.4  Failure information in top-level views
        7.1.5  Time corrections for human interaction
   7.2  Trigger point IDs
        7.2.1  Trigger point ID basics
        7.2.2  Trigger point IDs used
8  End-to-end DFS KPIs
   8.1  KPI abbreviations and reference
   8.2  Money Transfer Completion Rate, MTCR
        8.2.1  Functional description
        8.2.2  Formal definition
        8.2.3  Specific definition
   8.3  Money Transfer Completion Time, MTCT
        8.3.1  Functional description
        8.3.2  Formal definition
        8.3.3  Specific definition
   8.4  Money Transfer False Positive Rate, MTFPR
        8.4.1  Functional description
        8.4.2  Formal definition
        8.4.3  Specific definition
   8.5  Money Transfer False Negative Rate, MTFNR
        8.5.1  Functional description
        8.5.2  Formal definition
        8.5.3  Specific definition
   8.6  Money Transfer Failed Transaction Resolution Rate, MTFTRR
        8.6.1  Functional description
        8.6.2  Formal definition
        8.6.3  Specific definition
   8.7  Money Transfer Account Stabilization Success Rate, MTASSR
        8.7.1  Functional description
        8.7.2  Formal definition
        8.7.3  Specific definition
   8.8  Money Transfer Account Stabilization Time, MTAST
        8.8.1  Functional description
        8.8.2  Formal definition
        8.8.3  Specific definition
   8.9  Money Transfer Loss Rate, MTLR
        8.9.1  Functional description
        8.9.2  Formal definition
        8.9.3  Specific definition
   8.10 Money Transfer Duplication Rate, MTDR
        8.10.1  Functional description
        8.10.2  Formal definition
        8.10.3  Specific definition
9  Acquisition of data on DFS transactions
   9.1  Overview
   9.2  Primary DFS data collection modes
        9.2.1  General remarks
        9.2.2  Collection on paper, deferred transfer
        9.2.3  Direct entry into electronic form
   9.3  Data file naming
        9.3.1  General file naming
        9.3.2  Specific file names
   9.4  Campaign logs
   9.5  Handling of confirmation/information SMS (secondary information)
10 Special considerations for manually operated testing and time-taking
11 Measurements in the background
   11.1  Overview and basic assumptions
   11.2  Data acquired
   11.3  Test cases for transport network background testing
   11.4  Monitoring
12 Data validation and processing
   12.1  Plausibility and validity checks
         12.1.1  Tests on DFS data
         12.1.2  Tests on background test data
         12.1.3  Cross tests between data (after import)
         12.1.4  Additional processing
Annex A  One-time tests
   A.1  Introduction
        A.1.1  Determine time-outs
Annex B  Check lists to be used in testing campaigns
   B.1  Introduction
        B.1.1  Daily, prior to beginning of tests
        B.1.2  At each new testing location
        B.1.3  Daily, after completion of tests
Annex C  KPI/Trigger point lookup table
Appendix I  Device set-up for the Ghana pilot
   I.1  General
   I.2  Basic device set-up
   I.3  Setup for MoMo account
   I.4  SMS Backup & Restore app
   I.5  Application for active network testing
        I.5.1  General
        I.5.2  Scenario used for the pilot
   I.6  Additional software
Appendix II  Naming rules, data structures and related processes used in the pilot project
   II.1  Naming
         II.1.1  General
         II.1.2  Teams
         II.1.3  Devices
   II.2  Team and device assignment list
   II.3  Notification SMS
         II.3.1  Transfer and data handling process
         II.3.2  Notification SMS data table structure
         II.3.3  Assignment of primary test data and SMS
         II.3.4  Storage and deletion aspects of SMS on devices
Appendix III  Description of the Ghana pilot campaign
   III.1  Data collection method
   III.2  Event definition
   III.3  Mapping of acquired data to formal trigger points
   III.4  Background testing of the transport network
Appendix IV  Campaign log examples
Bibliography