Rec. ITU-T P.808 (06/2021) Subjective evaluation of speech quality with a crowdsourcing approach
Summary
History
FOREWORD
Table of Contents
1 Scope
2 References
3 Definitions
     3.1 Terms defined elsewhere
     3.2 Terms defined in this Recommendation
4 Abbreviations and acronyms
5 Conventions
6 Crowdsourcing listening-only tests
     6.1 Database structure
     6.2 Design of experiment
          6.2.1 Crowdsourcing micro-task platform
          6.2.2 Duration of test
     6.3 Listening test procedure
          6.3.1 Listening session
               6.3.1.1 Qualification job
               6.3.1.2 Training job
               6.3.1.3 Rating job
          6.3.2 Listening environment
          6.3.3 Listening system
          6.3.4 Listening level
          6.3.5 Listeners
          6.3.6 Opinion scales
          6.3.7 Instructions to subjects
          6.3.8 Reliability check mechanisms
     6.4 Data analysis and reporting of results
          6.4.1 Data screening
          6.4.2 Statistical analysis
          6.4.3 Reporting subjective MOS values
Annex A  Absolute category rating (ACR) method
     A.1 Opinion scales
     A.2 Stimulus presentation
     A.3 Statistical analysis
Annex B  Degradation category rating (DCR) method
     B.1 Introduction
     B.2 Opinion scale
     B.3 Stimulus presentation
     B.4 Statistical analysis
Annex C  Comparison category rating (CCR) method
     C.1 Introduction
     C.2 Opinion scales
     C.3 Stimulus presentation
     C.4 Statistical analysis
Annex D  Evaluating the subjective quality of speech in noise
     D.1 Introduction
     D.2 Reference conditions
     D.3 Opinion scales
     D.4 Stimulus presentation
     D.5 Statistical analysis
Appendix I  Example of job design
     I.1 Qualification job
     I.2 Training job
     I.3 Rating job
Bibliography