[3] Banuba, “AR Virtual Makeup Online Technology Demo,” [Online]. Available: https://www.banuba.com/hubfs/virtual-makeup-demo/index.html (accessed: Jun. 15, 2025).
[4] Hugging Face, “Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person,” [Online]. Available: https://huggingface.co/spaces/HumanAIGC/OutfitAnyone (accessed: Jun. 15, 2025).
[5] Hugging Face, “Outfit Anyone in the Wild: Get rid of Annoying Restrictions for Virtual Try-on Task,” [Online]. Available: https://huggingface.co/spaces/selfit-camera/OutfitAnyone-in-the-Wild (accessed: Jun. 15, 2025).
[6] GitHub, “IDM-VTON,” [Online]. Available: https://github.com/yisol/IDM-VTON (accessed: Jun. 15, 2025).
[7] GitHub, “CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models,” [Online]. Available: https://github.com/Zheng-Chong/CatVTON (accessed: Jun. 15, 2025).
[8] GitHub, “Ti-MGD: Multimodal-Conditioned Latent Diffusion Models for Fashion Image Editing,” [Online]. Available: https://github.com/aimagelab/Ti-MGD (accessed: Jun. 15, 2025).
[9] GitHub, “GPHT: Generative Pretrained Hierarchical Transformer for Time Series Forecasting,” [Online]. Available: https://github.com/icantnamemyself/GPHT (accessed: Jun. 15, 2025).
[10] S. Pathak, V. Kaushik, and B. Lall, “Single stage warped cloth learning and semantic-contextual attention feature fusion for virtual tryon,” in 2024 IEEE International Conference on Multimedia and Expo (ICME), 2024, pp. 1–6.
[11] S. Pathak, V. Kaushik, and B. Lall, “MAC-VTON: Multi-modal Attention Conditioning for Virtual Try-on with Diffusion-Based Inpainting,” in 2024 39th International Conference on Image and Vision Computing New Zealand (IVCNZ), 2024, pp. 1–6.
[12] Z. Chong, X. Dong, H. Li, S. Zhang, W. Zhang, H. Zhao, X. Zhang, D. Jiang, and X. Liang, “CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models,” in International Conference on Learning Representations (ICLR), 2025.
[13] Y. Choi, S. Kwak, K. Lee, H. Choi, and J. Shin, “Improving diffusion models for authentic virtual try-on in the wild,” in European Conference on Computer Vision, Cham: Springer Nature Switzerland, 2024, pp. 206–235.
[14] A. Baldrati, D. Morelli, G. Cartella, M. Cornia, M. Bertini, and R. Cucchiara, “Multimodal garment designer: Human-centric latent diffusion models for fashion image editing,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 23393–23402.
[15] Z. Liu, J. Yang, M. Cheng, Y. Luo, and Z. Li, “Generative Pretrained Hierarchical Transformer for Time Series Forecasting,” in Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), New York, NY, USA: Association for Computing Machinery, 2024, pp. 2003–2013.

