model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021.
[9] Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019.
[10] Bofeng Zhang, Xiuhong Yao, Haiyan Li, and Mirensha Aini. Chinese medical named entity recognition based on expert knowledge and fine-tuning BERT. In 2023 IEEE International Conference on Knowledge Graph (ICKG), pages 84–90, 2023.
[11] Ning Liu, Qian Hu, Huayun Xu, Xing Xu, and Mengxin Chen. Med-BERT: A pretraining framework for medical records named entity recognition. IEEE Transactions on Industrial Informatics, 18(8):5600–5608, 2022.
[12] Rajesh Kumar, Abdullah Aman Khan, Jay Kumar, Noorbakhsh Amiri Golilarz, Simin Zhang, Yang Ting, Chengyu Zheng, Wenyong Wang, et al. Blockchain-federated-learning and deep learning models for COVID-19 detection using CT imaging. IEEE Sensors Journal, 21(14):16301–16314, 2021.
[13] Yiqiang Chen, Xin Qin, Jindong Wang, Chaohui Yu, and Wen Gao. FedHealth: A federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems, 35(4):83–93, 2020.
[14] Dianbo Sui, Yubo Chen, Jun Zhao, Yantao Jia, Yuantao Xie, and Weijian Sun. FedED: Federated learning via ensemble distillation for medical relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2118–2128, 2020.
[15] Ittai Dayan, Holger R Roth, Aoxiao Zhong, Ahmed Harouni, Amilcare Gentili, Anas Z Abidin, Andrew Liu, Anthony Beardsworth Costa, Bradford J Wood, Chien-Sung Tsai, et al. Federated learning for predicting clinical outcomes in patients with COVID-19. Nature Medicine, 27(10):1735–1743, 2021.
[16] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.