                      12  Judging the submissions


                      12.1  Common output format


                      Challenge participants may produce the following outputs:
                      •    Demo video (short; can be uploaded to the Challenge website);
                      •    Demonstration explaining the concept and the solution's use of AI/ML in 5G;
                      •    Brief paper explaining the problem and the solution, with a section explaining its
                           relationship to standards (e.g. ITU-T Y.3172, Y.3173 and Y.3174) and to partner resources.

                      12.2  Additional output for open-source code


                      If the output will be shared as open source, participants are expected to provide
                      the following, in addition to the outputs described by clause 12.1:

                      •    Final version of the code;
                      •    Reproducibility: participants are recommended to create a Docker image containing
                           all dependencies and environments required to run the algorithm (a minimal sketch
                           follows this list);
                      •    ReadMe file containing a description of the algorithm;
                      •    Minimum system configuration required to run the algorithm;
                      •    Details of any data used to train the model (metadata);
                      •    Alignment of the open source with standards: the application of standards-based ML
                           mechanisms in 5G is encouraged in open source as part of this Challenge, and
                           wherever applicable, outcomes of the Challenge are encouraged to be shared in an
                           open forum as an open-source project;
                      •    Test cases and results that prove the benefits of the solution.
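
                      As an illustration of the reproducibility item above, a minimal Dockerfile along the
                      following lines can package the algorithm with its dependencies; the Python base image,
                      the requirements.txt file and the run_algorithm.py entry point are placeholders for
                      whatever the actual solution uses:

                      FROM python:3.10-slim                        # placeholder base image
                      WORKDIR /app
                      COPY requirements.txt .                      # pinned dependency list
                      RUN pip install --no-cache-dir -r requirements.txt
                      COPY . .                                     # algorithm code, ReadMe and test cases
                      ENTRYPOINT ["python", "run_algorithm.py"]    # placeholder entry point

                      Judges can then rebuild and rerun the solution with the standard docker build and
                      docker run commands on any host with Docker installed.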


                      12.3  Additional output for proprietary code

                      If the output is proprietary (not open source), participants are expected to provide
                      the following, in addition to the outputs described by clause 12.1:

                      •    Reproducibility: participants are recommended to create a Docker image containing
                           all dependencies and environments required to run the algorithm (see the sketch in
                           clause 12.2);
                      •    ReadMe file containing a description of the algorithm;
                      •    Minimum system configuration required to run the algorithm;
                      •    Details of any data used to train the model (metadata);
                      •    Test cases and results demonstrating the benefits of the solution (see the sketch
                           after this list).
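
                      As an illustration of the test-case item above, a small self-checking script in the
                      following spirit can record the measurement behind a claimed benefit; the latency
                      metric, the 20% threshold and both function names are hypothetical placeholders rather
                      than Challenge requirements:

                      # test_benefit.py - hypothetical sketch; metric, names and numbers are placeholders
                      def baseline_latency_ms() -> float:
                          # stand-in for measuring the non-ML baseline
                          return 12.0

                      def solution_latency_ms() -> float:
                          # stand-in for measuring the submitted AI/ML solution
                          return 9.0

                      def test_solution_improves_latency() -> None:
                          baseline = baseline_latency_ms()
                          solution = solution_latency_ms()
                          # the claimed benefit: at least a 20% latency reduction over the baseline
                          assert solution <= 0.8 * baseline

                      if __name__ == "__main__":
                          test_solution_improves_latency()
                          print("Benefit test passed: latency reduced by at least 20%.")

                      Such tests can also be collected by a standard runner such as pytest, so the reported
                      results stay reproducible alongside the Docker image of clause 12.2.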

                      12.4  Evaluation criteria


                      The final criteria to be used to select winners in the Global Round and the Final Conference
                      will be published by the “Challenge Management Board” (see below).

                      The final criteria are expected to cover areas such as:

                      •    Novelty and originality;
                      •    Status and maturity of the technical implementation, and reproducibility;
                      •    Viability and impact on the market (practicality of the solution and significance
                           of its impact);
                      •    Interoperability and mapping to international standards (including ITU standards).



