Recommendation ITU-R BT.500-15
Policy on Intellectual Property Right (IPR)
PART 1
Overview of subjective image assessment requirements
1 Introduction
2 Common assessment features
2.1 General viewing conditions
2.1.1 General viewing conditions for subjective assessments in a laboratory environment
2.1.2 General viewing conditions for subjective assessments in a home environment
2.1.3 Viewing distance
2.1.4 Observation angle
2.1.5 Room environment – colour scheme
2.1.6 The display
2.2 Source signals
2.3 Selection of test materials
2.3.1 ITU-R test sequences
2.4 Range of conditions and anchoring
2.5 Observers
2.5.1 Number of observers
2.5.2 Observer screening
2.5.3 Instructions for the assessment
2.6 The test session
2.7 Presentation of the results
3 Selection of test methods
Annex 1 to Part 1 Analysis and presentation of results
A1-1 Introduction
A1-2 Common methods of analysis
A1-2.1 Calculation of mean scores
A1-2.2 Calculation of confidence interval
A1-2.3 Post-screening of the observers
A1-2.4 Calculation of mean scores and confidence intervals under challenging test conditions
A1-3 Processing to find a relationship between the mean score and the objective measure of an image distortion
A1-3.1 Approximation by a symmetrical logistic function
A1-3.2 Approximation by a non-symmetrical function
A1-3.3 Correction of the residual impairment/enhancement and the scale boundary effect
A1-3.4 Incorporation of the reliability aspect in the graphs
A1-4 Conclusions
Attachment 1 to Annex 1 The reference implementation of the method from § A1-2.4
Annex 2 to Part 1 Description of a common inter-change data file format
Annex 3 (informative) to Part 1 Image-content failure characteristics
A3-1 Introduction
A3-2 Deriving the failure characteristic
A3-3 Use of the failure characteristic
Annex 4 (informative) to Part 1 Method of determining a composite failure characteristic for programme content and transmission conditions
A4-1 Introduction
A4-2 Programme-content analysis
A4-3 Transmission-channel analysis
A4-4 Derivation of composite failure characteristics
Annex 5 (informative) to Part 1 Contextual effect
Annex 6 (informative) to Part 1 The spatial and temporal information measures
Annex 7 (informative) to Part 1 Terms and definitions
PART 2
Description of subjective image assessment methodologies
1 Introduction
2 Recommended image assessment methodologies
3 Remarks
Annex 1 to Part 2 The double-stimulus impairment scale (DSIS) method (the EBU method)
A1-1 General description
A1-2 General arrangement
A1-3 Presentation of the test material
A1-4 Grading scales
A1-5 The introduction to the assessments
A1-6 The test session
Annex 2 to Part 2 The double-stimulus continuous quality-scale (DSCQS) method
A2-1 General description
A2-2 General arrangement
A2-3 Presentation of the test material
A2-4 Grading scale
A2-5 Analysis of the results
A2-6 Interpretation of the results
Annex 3 to Part 2 Single-stimulus (SS) methods
A3-1 General arrangement
A3-2 Selection of test material
A3-3 Test session
A3-4 Types of SS methods
A3-4.1 Adjectival categorical judgement methods
A3-4.2 Numerical categorical judgement methods
A3-4.3 Non-categorical judgement methods
A3-4.4 Performance methods
Annex 4 to Part 2 Stimulus-comparison methods
A4-1 General arrangement
A4-2 The selection of test material
A4-3 Test session
A4-4 Types of stimulus-comparison methods
A4-4.1 Adjectival categorical judgement methods
A4-4.2 Non-categorical judgement methods
A4-4.3 Performance methods
Annex 5 to Part 2 Single stimulus continuous quality evaluation (SSCQE)
A5-1 Recording device and set-up
A5-2 General form of the test protocol
A5-3 Viewing parameters
A5-4 Grading scales
A5-5 Observers
A5-6 Instructions to the observers
A5-7 Data presentation, results processing and presentation
A5-8 Calibration of continuous quality results and derivation of a single quality rating
Annex 6 to Part 2 Simultaneous double stimulus for continuous evaluation (SDSCE) method
A6-1 The test procedure
A6-2 The different phases
A6-3 Test protocol features
A6-4 Data processing
A6-5 Reliability of the subjects
Annex 7 to Part 2 Subjective Assessment of Multimedia Video Quality (SAMVIQ)
A7-1 Introduction
A7-2 Explicit, hidden reference and algorithms
A7-3 Test conditions
A7-4 Test organization
A7-5 Presentation and analysis of data
A7-5.1 Summary information
A7-5.2 Methods of analysis
A7-5.3 Observer screening
A7-6 Example of interface for SAMVIQ (informative)
Annex 8 to Part 2 Expert viewing protocol (EVP) for the evaluation of the quality of video material
A8-1 Laboratory set-up
A8-1.1 Display selection and set-up
A8-1.2 Viewing distance
A8-1.3 Viewing conditions
A8-2 Viewers
A8-3 The basic test cell
A8-4 Scoring sheet and rating scale
A8-5 Test design and session creation
A8-6 Training
A8-7 Data collection and processing
A8-8 Terms of use of the expert viewing protocol results
A8-9 Limitations of use of the EVP results
Attachment 1 (informative) to Annex 8 to Part 2 Application of the Expert Viewing Protocol and its behaviour in the presence of a large number of expert assessors
PART 3
Application-specific subjective assessment methodologies for image quality
Annex 1 to Part 3 Subjective assessment of standard definition television (SDTV) systems
A1-1 Introduction
A1-2 Viewing conditions
A1-2.1 Laboratory environment
A1-2.2 Home environment
A1-3 Assessment methods
A1-3.1 Evaluations of basic image quality
A1-3.2 Evaluations of image quality after downstream processing
A1-3.3 Evaluations of failure characteristics
A1-3.4 Image-content failure characteristics
A1-4 Application notes
Annex 2 to Part 3 Subjective assessment of the image quality of high definition television (HDTV) systems
A2-1 Viewing environment
A2-2 Assessment methods
A2-3 Test materials
Annex 3 to Part 3 Subjective assessment of the image quality of alphanumeric and graphic images in Teletext and similar text services
A3-1 Viewing conditions
A3-2 Assessment methods
A3-3 Assessment context
Annex 4 to Part 3 Subjective assessment of the image quality of multi-programme services
A4-1 General assessment details
A4-2 Subjective image assessment procedures for constant bit rate multi-programme services
A4-3 Subjective image assessment procedures for variable bit rate multi-programme services
Annex 5 to Part 3 Expert viewing of the image quality of systems for the digital display of large screen digital imagery in theatres
A5-1 Introduction
A5-2 Why a new method based on ‘expert viewing’
A5-3 Definition of expert subjects
A5-4 Selection of the assessors
A5-5 Test material
A5-6 Viewing conditions
A5-7 Methodology
A5-7.1 Evaluation sessions
A5-8 Report
Annex 6 to Part 3 Subjective assessment of the image quality of multimedia applications
A6-1 Introduction
A6-2 Common features
A6-2.1 Viewing conditions
A6-2.2 Source signals
A6-2.3 Selection of test materials
A6-2.4 Range of conditions and anchoring
A6-2.5 Observers
A6-2.6 Experimental design
A6-3 Assessment methods
Annex 7 to Part 3 Subjective assessment of stereoscopic 3DTV systems
A7-1 Assessment (perceptual) dimensions
A7-1.1 Primary perceptual dimensions
A7-1.2 Additional perceptual dimensions
A7-2 Subjective methodologies
A7-3 General viewing conditions
A7-4 Test material
A7-4.1 Use of reference video material
A7-4.2 Visual comfort limits
A7-4.3 Discrepancies between left and right images
A7-4.4 Range, distribution and change in parallax
A7-5 Experimental apparatus
A7-6 Observers
A7-6.1 Sample size
A7-6.2 Vision screening
A7-7 Instructions to observers
A7-8 Session duration
A7-9 Variability of responses
A7-10 Viewers’ rejection criteria
A7-11 Statistical analysis
Attachment 1 to Annex 7 Test materials for vision test
A7-1 Vision test