Assoc. Prof. Chisako Muramatsu
Shiga University, Japan
Chisako Muramatsu received the B.S. degree in Health Sciences from Kanazawa University in 2001 and the Ph.D. degree in Medical Physics from the University of Chicago in 2008. She became a visiting associate professor in the Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, in 2012, and in the Faculty of Engineering, Gifu University, in 2017. Since 2019, she has been an associate professor in the Faculty of Data Science, Shiga University. She serves as a program committee member of the Computer-Aided Diagnosis Conference of SPIE Medical Imaging and of the International Workshop on Breast Imaging.
Speech Title: "Toward Prediagnosis and Automatic Filing of Dental Panoramic Images"
Abstract: Many dental clinics are operated by a single dentist and a few dental hygienists. It is burdensome to check every single tooth and fill out dental reports in a limited time. Our goal is to have an AI examine not only the teeth but also their surroundings to pick up any findings in the oral region and prefile a dental report, improving diagnostic efficiency. In this presentation, an automatic method for detecting and classifying tooth types and conditions on dental panoramic radiographs is introduced. In addition, possible prescreening of osteoporosis through dental checkups is discussed.
Asst. Prof. Jiaqing Liu
Ritsumeikan University, Japan
Jiaqing Liu received the B.E. degree from Northeastern University, Shenyang, China, in 2016, and the M.E. and D.E. degrees from Ritsumeikan University, Kyoto, Japan, in 2018 and 2021, respectively. From 2020 to 2021, he held a JSPS Research Fellowship for Young Scientists. From October 2021 to March 2022, he was a Specially Appointed Assistant Professor with the Department of Intelligent Media, ISIR, Osaka University, Osaka, Japan. He is currently an Assistant Professor with the College of Information Science and Engineering, Ritsumeikan University. His research interests include pattern recognition, image processing, and machine learning.
Speech Title: "Multimodal Deep Learning in Depression Estimation"
Abstract: Deep learning has been successfully applied in many research fields, such as computer vision, speech recognition, and natural language processing. Most of this work focuses on a single modality; for practical applications, however, multimodal information is more useful. Multimodal deep learning has attracted considerable attention and has become an important topic in the field of artificial intelligence. Compared with traditional single-modal deep learning, it faces the following challenges: development of multimodal datasets, multimodal representation, multimodal alignment, multimodal translation, and multimodal co-learning. The purpose of this talk is to introduce efficient and accurate multimodal deep learning methods and their application to depression estimation.
Dr. Dongxu Yang
University of Texas Southwestern Medical Center (UTSW), USA
Dr. Yang obtained his B.S. in Applied Physics and Ph.D. in Physical Electronics from the University of Science and Technology of China (USTC) in 2012 and 2018, respectively, where he mainly designed and developed low-noise, low-power electronics, including application-specific integrated circuits (ASICs) for charge-coupled device (CCD) cameras and high-speed data transceivers for high-energy particle experiments. After graduating from USTC, Dr. Yang underwent postdoctoral training in the Medical Physics and Engineering Division, Department of Radiation Oncology, at the University of Texas Southwestern Medical Center (UTSW), Dallas, Texas, USA. Currently, Dr. Yang serves as a Senior Research Associate at UTSW. His research interests cover positron emission tomography (PET) development and applications, including detectors, readout electronics, data acquisition, image processing, and image-guided adaptive radiation therapy.
Speech Title: "Proton Intra-beam Range Verification with Low-dose, Short-acquisition, Online PET Imaging"
Abstract: It is highly desirable to measure the proton beam range (BR) with a fraction of the therapeutic beams (intra-beam) within a single treatment session, which would enable, if necessary, adaptive delivery of the remaining beams based on the measured range shift to achieve the planned dose distribution. The success of such an approach would be a paradigm shift, establishing low-dose, intra-beam range measurement and range-guided adaptive beam delivery to substantially improve therapy certainty and accuracy. This presentation reports the development of a brain-dedicated PET for the proposed online measurement of the proton-induced positron activity range (AR), evaluates its capability and performance in offline studies, and presents online proton beam experiments with a head-and-neck phantom. The PET consists of 20 detector panels in a polygon configuration with ~3 mm spatial resolution, a ~32 cm diameter, and a 6.4 cm axial field of view; two detector panels can be removed for beam passage. Offline studies with point 22Na or 18F-FDG radioactive sources showed that the PET has excellent linearity in range-shift measurement. For the online experiments, Lucite sheets of different thicknesses were inserted to alter the beam path length. The results demonstrated that the PET achieved AR-shift deviations of less than 2.0 mm relative to the physical beam shifts, and that the AR shift could be compensated with adaptive plans to a deviation of less than 1.0 mm.