A selection of publications I have contributed to, either peer-reviewed or preprints.
2024
arXiv
Segmentation of Non-Small Cell Lung Carcinomas: Introducing DRU-Net and Multi-Lens Distortion
Soroush Oskouei, Marit Valla, André Pedersen, Erik Smistad, Vibeke Grotnes Dale, Maren Høibø, Sissel Gyrid Freim Wahl, Mats Dehli Haugum, Thomas Langø, Maria Paula Ramnefjell, Lars Andreas Akslen, Gabriel Kiss, and Hanne Sorger
Considering the increased workload in pathology laboratories today, automated tools such as artificial intelligence models can help pathologists with their tasks and ease the workload. In this paper, we propose a segmentation model (DRU-Net) that can provide a delineation of human non-small cell lung carcinomas, and an augmentation method that can improve classification results. The proposed model is a fused combination of truncated pre-trained DenseNet201 and ResNet101V2 as a patch-wise classifier, followed by a lightweight U-Net as a refinement model. We have used two datasets (Norwegian Lung Cancer Biobank and Haukeland University Hospital lung cancer cohort) to create our proposed model. The DRU-Net model achieves an average Dice similarity coefficient of 0.91. The proposed spatial augmentation method (multi-lens distortion) improved the network performance by 3%. Our findings show that choosing image patches that specifically include regions of interest leads to better results for the patch-wise classifier compared to other sampling methods. The qualitative analysis showed that the DRU-Net model is generally successful in detecting the tumor. On the test set, some of the cases showed areas of false positive and false negative segmentation in the periphery, particularly in tumors with inflammatory and reactive changes.
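For illustration, a minimal Keras sketch of such a fused patch-wise classifier might look as follows; the patch size, pooling choice, and truncation points are assumptions rather than the paper's exact configuration:

```python
# Hedged sketch: fusing truncated DenseNet201 and ResNet101V2 backbones
# into one patch-wise classifier (configuration details are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, ResNet101V2

inputs = layers.Input(shape=(256, 256, 3))  # patch size is an assumption
dense_feat = DenseNet201(include_top=False, weights="imagenet", pooling="avg")(inputs)
res_feat = ResNet101V2(include_top=False, weights="imagenet", pooling="avg")(inputs)
fused = layers.Concatenate()([dense_feat, res_feat])    # fuse the two feature vectors
outputs = layers.Dense(2, activation="softmax")(fused)  # tumor vs. non-tumor patch
model = Model(inputs, outputs)
```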
Neu.Onc.Adv
Growth dynamics of untreated meningiomas
Per Sveino Strand, Kathrine Jørgensen Wågø, André Pedersen, Ingerid Reinertsen, Olivia Nälsund, Asgeir Store Jakola, David Bouget, Sayied Abdol Mohieb Hosainey, Lisa Millgård Sagberg, Johanna Vanel, and Ole Solheim
Knowledge about meningioma growth characteristics is needed for developing biologically rational follow-up routines. In this study of untreated meningiomas followed with repeated MRIs, we studied growth dynamics and explored potential factors associated with tumor growth. In a single-center cohort study, we included 235 adult patients with a radiologically suspected intracranial meningioma and at least three MRI scans during follow-up. Tumors were segmented from contrast-enhanced T1-series using an automatic algorithm and, if needed, manually corrected. Potential meningioma growth curves were statistically compared: linear, exponential, linear radial, and Gompertzian. Factors associated with growth were explored. In 235 patients, 1394 MRI scans were carried out in the median five-year observational period. Of the models tested, a Gompertzian growth curve best described the growth dynamics of meningiomas at group level. 59% of the tumors grew, 27% remained stable, and 14% shrank. Only 13 patients (5%) underwent surgery during the observational period and were excluded after surgery. Tumor size at time of diagnosis, multifocality, and length of follow-up were associated with tumor growth, whereas age, sex, and presence of peritumoral edema or hyperintense T2-signal were not significant factors. Untreated meningiomas follow a Gompertzian growth curve, indicating that increasing and potentially doubling subsequent follow-up intervals between MRIs seems biologically reasonable, instead of using fixed time intervals. Tumor size at diagnosis is the strongest predictor of future growth, indicating a potential for longer follow-up intervals for smaller tumors. Although most untreated meningiomas grow, few require surgery.
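For reference, one common parameterization of the Gompertz curve (the paper's exact parameterization may differ) is

$$ V(t) = V_{\infty} \exp\!\left( \ln\!\left( \frac{V_0}{V_{\infty}} \right) e^{-\alpha t} \right), $$

where $V_0$ is the tumor volume at diagnosis, $V_{\infty}$ the limiting volume, and $\alpha$ the rate at which the relative growth rate decays; growth slows progressively over time, which is what motivates lengthening follow-up intervals.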
2023
arXiv
Immunohistochemistry guided segmentation of benign epithelial cells, in situ lesions, and invasive epithelial cells in breast cancer slides
Maren Høibø, André Pedersen, Vibeke Grotnes Dale, Sissel Marie Berget, Borgny Ytterhus, Cecilia Lindskog, Elisabeth Wik, Lars A. Akslen, Ingerid Reinertsen, Erik Smistad, and Marit Valla
Digital pathology enables automatic analysis of histopathological sections using artificial intelligence (AI). Automatic evaluation could improve diagnostic efficiency and help find associations between morphological features and clinical outcome. For development of such prediction models, identifying invasive epithelial cells, and separating these from benign epithelial cells and in situ lesions, would be the first step. In this study, we aimed to develop an AI model for segmentation of epithelial cells in sections from breast cancer. We generated epithelial ground truth masks by restaining hematoxylin and eosin (HE) sections with cytokeratin (CK) AE1/AE3, and by pathologists’ annotations. HE/CK image pairs were used to train a convolutional neural network, and data augmentation was used to make the model more robust. Tissue microarrays (TMAs) from 839 patients, and whole slide images from two patients, were used for training and evaluation of the models. The sections were derived from four cohorts of breast cancer patients. TMAs from 21 patients from a fifth cohort were used as a second test set. In quantitative evaluation, mean Dice scores of 0.70, 0.79, and 0.75 were achieved for invasive epithelial cells, benign epithelial cells, and in situ lesions, respectively. In qualitative scoring (0-5) by pathologists, results were best for all epithelium and invasive epithelium, with scores of 4.7 and 4.4. Scores for benign epithelium and in situ lesions were 3.7 and 2.0. The proposed model segmented epithelial cells in HE-stained breast cancer slides well, but further work is needed for accurate division between the classes. Immunohistochemistry, together with pathologists’ annotations, enabled the creation of accurate ground truths. The model is made freely available in FastPathology and the code is available at https://github.com/AICAN-Research/breast-epithelium-segmentation
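The Dice similarity coefficient used in the quantitative evaluation can be computed from binary masks as in this small sketch:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), for binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```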
arXiv
AeroPath: An airway segmentation benchmark dataset with challenging pathology
Karen-Helene Støverud, David Bouget, André Pedersen, Håkon Olav Leira, Thomas Langø, and Erlend Fagertun Hofstad
To improve the prognosis of patients suffering from pulmonary diseases, such as lung cancer, early diagnosis and treatment are crucial. The analysis of CT images is invaluable for diagnosis, whereas high-quality segmentation of the airway tree is required for intervention planning and live guidance during bronchoscopy. Recently, the Multi-domain Airway Tree Modeling (ATM’22) challenge released a large dataset, both enabling training of deep-learning based models and bringing substantial improvement of the state of the art for the airway segmentation task. However, the ATM’22 dataset includes few patients with severe pathologies affecting the airway tree anatomy. In this study, we introduce a new public benchmark dataset (AeroPath), consisting of 27 CT images from patients with pathologies ranging from emphysema to large tumors, with corresponding trachea and bronchi annotations. Second, we present a multiscale fusion design for automatic airway segmentation. Models were trained on the ATM’22 dataset, tested on the AeroPath dataset, and further evaluated against competitive open-source methods. The same performance metrics as used in the ATM’22 challenge were used to benchmark the different considered approaches. Lastly, an open web application was developed to easily test the proposed model on new data. The results demonstrated that our proposed architecture predicted topologically correct segmentations for all the patients included in the AeroPath dataset. The proposed method is robust and able to handle various anomalies, down to at least the fifth airway generation. In addition, the AeroPath dataset, featuring patients with challenging pathologies, will contribute to the development of new state-of-the-art methods. The AeroPath dataset and the web application are made openly available.
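As a rough illustration of the multiscale fusion idea, the sketch below averages voxel-wise probabilities from a downsampled whole-volume pass and a native-resolution pass; `model_lowres` and `model_fullres` are hypothetical callables, not the paper's actual interface:

```python
# Hedged sketch of two-scale fusion at inference time (assumptions throughout).
import numpy as np
from scipy.ndimage import zoom

def fuse_predictions(ct: np.ndarray, model_lowres, model_fullres) -> np.ndarray:
    low = zoom(ct, 0.5, order=1)                       # coarse pass for global context
    factors = np.array(ct.shape) / np.array(low.shape)
    p_low = zoom(model_lowres(low), factors, order=1)  # resample back to native grid
    p_full = model_fullres(ct)                         # fine pass for local detail
    return (p_low + p_full) / 2.0                      # simple probability average
```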
Sci.Rep
Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks
Ragnhild Holden Helland, Alexandros Ferles, André Pedersen, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel Berger, Tora Dunås, Marco Conti Nibali, Julia Furtner, Shawn Hervey-Jumper, Albert Idema, Barbara Kiesel, Rishi Tewari, Emmanuel Mandonnet, Domenique Müller, Pierre Robe, Marco Rossi, and David Bouget
Extent of resection after surgery is one of the main prognostic factors for patients diagnosed with glioblastoma. To estimate it, accurate segmentation and classification of residual tumor from post-operative MR images is essential. The current standard method for estimating extent of resection is subject to high inter- and intra-rater variability, and an automated method for segmentation of residual tumor in early post-operative MRI could lead to a more accurate estimation. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task. The models were extensively validated on a multicenter dataset with nearly 1000 patients, from 12 hospitals in Europe and the United States. The best performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with that of human expert raters. The predicted segmentations can be used to accurately classify the patients into those with residual tumor and those with gross total resection.
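A classification into residual tumor versus gross total resection can then follow from a simple volume threshold on the predicted segmentation; the sketch below is illustrative and the cutoff is an assumption, not the paper's value:

```python
import numpy as np

def classify_resection(seg: np.ndarray, voxel_volume_ml: float, cutoff_ml: float = 0.175):
    """Label a patient from a binary residual-tumor mask (cutoff is illustrative)."""
    residual_ml = seg.sum() * voxel_volume_ml
    return "residual tumor" if residual_ml > cutoff_ml else "gross total resection"
```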
Sci.Rep
Raidionics: an open software for pre-and postoperative central nervous system tumor segmentation and standardized reporting
David Bouget, Demah Alsinan, Valeria Gaitan, Ragnhild Holden Helland, André Pedersen, Ole Solheim, and Ingerid Reinertsen
For patients suffering from central nervous system tumors, prognosis estimation, treatment decisions, and postoperative assessments are made from the analysis of a set of magnetic resonance (MR) scans. Currently, the lack of open tools for standardized and automatic tumor segmentation and generation of clinical reports, incorporating relevant tumor characteristics, leads to potential risks from the inherent subjectivity of these decisions. To tackle this problem, the proposed Raidionics open-source software has been developed, offering both a user-friendly graphical user interface and a stable processing backend. The software includes preoperative segmentation models for each of the most common tumor types (i.e., glioblastomas, lower grade gliomas, meningiomas, and metastases), together with one early postoperative glioblastoma segmentation model. Preoperative segmentation performances were quite homogeneous across the four different brain tumor types, with an average Dice around 85% and patient-wise recall and precision around 95%. Postoperatively, performances were lower, with an average Dice of 41%. Overall, the generation of a standardized clinical report, including the tumor segmentation and features computation, requires about ten minutes on a regular laptop. The proposed Raidionics software is the first open solution enabling easy use of state-of-the-art segmentation models for all major tumor types, including preoperative and postsurgical standardized reports.
PLOS ONE
Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation
Javier Frutos, André Pedersen, Egidijus Pelanis, David Bouget, Shanmugapriya Survarachakan, Thomas Langø, Ole-Jakob Elle, and Frank Lindseth
Purpose: This study aims to explore training strategies to improve convolutional neural network-based image-to-image deformable registration for abdominal imaging. Methods: Different training strategies, loss functions, and transfer learning schemes were considered. Furthermore, an augmentation layer which generates artificial training image pairs on-the-fly was proposed, in addition to a loss layer that enables dynamic loss weighting. Results: Guiding registration using segmentations in the training step proved beneficial for deep-learning-based image registration. Finetuning the pretrained model from the brain MRI dataset to the abdominal CT dataset further improved performance on the latter application, removing the need for a large dataset to yield satisfactory performance. Dynamic loss weighting also marginally improved performance, all without impacting inference runtime. Conclusion: Using simple concepts, we improved the performance of a commonly used deep image registration architecture, VoxelMorph. In future work, our framework, DDMR, should be validated on different datasets to further assess its value.
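One common way to implement dynamic loss weighting is with learnable log-variances per loss term (uncertainty weighting); the sketch below shows that general idea and is not necessarily the paper's exact scheme:

```python
import tensorflow as tf

class DynamicLossWeighting(tf.keras.layers.Layer):
    """Weights n losses as sum_i exp(-s_i) * L_i + s_i, with trainable s_i."""
    def __init__(self, n_losses: int):
        super().__init__()
        self.log_vars = self.add_weight(
            shape=(n_losses,), initializer="zeros", trainable=True, name="log_vars")

    def call(self, losses):
        terms = [tf.exp(-self.log_vars[i]) * loss + self.log_vars[i]
                 for i, loss in enumerate(losses)]
        return tf.add_n(terms)
```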
2022
Fron.Med
H2G-Net: A multi-resolution refinement approach for segmentation of breast cancer region in gigapixel histopathological images
André Pedersen, Erik Smistad, Tor V. Rise, Vibeke G. Dale, Henrik S. Pettersen, Tor-Arne S. Nordmo, David Bouget, Ingerid Reinertsen, and Marit Valla
Over the past decades, histopathological cancer diagnostics has become more complex, and the increasing number of biopsies is a challenge for most pathology laboratories. Thus, development of automatic methods for evaluation of histopathological cancer sections would be of value. In this study, we used 624 whole slide images (WSIs) of breast cancer from a Norwegian cohort. We propose a cascaded convolutional neural network design, called H2G-Net, for segmentation of breast cancer region from gigapixel histopathological images. The design involves a detection stage using a patch-wise method, and a refinement stage using a convolutional autoencoder. To validate the design, we conducted an ablation study to assess the impact of selected components in the pipeline on tumor segmentation. Guiding segmentation, using hierarchical sampling and deep heatmap refinement, proved to be beneficial when segmenting the histopathological images. We found a significant improvement when using a refinement network for post-processing the generated tumor segmentation heatmaps. The overall best design achieved a Dice similarity coefficient of 0.933±0.069 on an independent test set of 90 WSIs. The design outperformed single-resolution approaches, such as cluster-guided, patch-wise high-resolution classification using MobileNetV2 (0.872±0.092) and a low-resolution U-Net (0.874±0.128). In addition, the design performed consistently on WSIs across all histological grades, and segmentation of a representative ×400 WSI took 58 s, using only the central processing unit. The findings demonstrate the potential of utilizing a refinement network to improve patch-wise predictions. The solution is efficient and does not require overlapping patch inference or ensembling. Furthermore, we showed that deep neural networks can be trained using a random sampling scheme that balances across multiple labels simultaneously, without the need of storing patches on disk. Future work should involve more efficient patch generation and sampling, as well as improved clustering.
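The balanced random sampling scheme can be sketched as drawing a label first and then a patch location carrying that label, reading pixels on demand instead of storing patches; names here are illustrative, not the paper's API:

```python
import random

def sample_patch(wsi_readers, label_index, patch_size=(256, 256)):
    """label_index maps label -> list of (wsi_id, x, y) candidate positions."""
    label = random.choice(list(label_index))          # uniform over labels
    wsi_id, x, y = random.choice(label_index[label])  # then uniform over positions
    patch = wsi_readers[wsi_id].read_region((x, y), 0, patch_size)  # e.g., OpenSlide
    return patch, label
```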
PLOS ONE
Teacher-student approach for lung tumor segmentation from mixed-supervised datasets
Vemund Fredriksen, Svein Ole M. Sevle, André Pedersen, Thomas Langø, Gabriel Kiss, and Frank Lindseth
Purpose: Cancer is among the leading causes of death in the developed world, and lung cancer is the most lethal type. Early detection is crucial for better prognosis, but can be resource intensive to achieve. Automating tasks such as lung tumor localization and segmentation in radiological images can free valuable time for radiologists and other clinical personnel. Convolutional neural networks may be suited for such tasks, but require substantial amounts of labeled data to train. Obtaining labeled data is a challenge, especially in the medical domain. Methods: This paper investigates the use of a teacher-student design to utilize datasets with different types of supervision to train an automatic model performing pulmonary tumor segmentation on computed tomography images. The framework consists of two models: the student, which performs end-to-end automatic tumor segmentation, and the teacher, which supplies the student with additional pseudo-annotated data during training. Results: Using only a small proportion of semantically labeled data and a large number of bounding box annotated data, we achieved competitive performance using a teacher-student design. Models trained on larger amounts of semantic annotations did not perform better than those trained on teacher-annotated data. Our model trained on a small number of semantically labeled data achieved a mean Dice similarity coefficient of 71.0 on the MSD Lung dataset. Conclusions: Our results demonstrate the potential of utilizing teacher-student designs to reduce the annotation load, as less supervised annotation schemes may be performed without any real degradation in segmentation accuracy.
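The core teacher-student mechanism can be sketched as the teacher producing dense pseudo-masks that are restricted to the annotated bounding boxes before being handed to the student; `teacher.predict` and the boolean `box` masks are illustrative assumptions:

```python
def make_pseudo_labels(teacher, volumes, boxes):
    """Turn box-annotated CT volumes into voxel-wise pseudo-annotations."""
    pseudo = []
    for volume, box in zip(volumes, boxes):
        mask = teacher.predict(volume)  # dense teacher prediction
        mask[~box] = 0                  # keep only voxels inside the annotated box
        pseudo.append(mask)
    return pseudo
```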
Comp.M.Bio
Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding
David Bouget, André Pedersen, Johanna Vanel, Haakon O. Leira, and Thomas Langø
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization Mar 2022
As lung cancer evolves, the presence of potentially malignant lymph nodes must be assessed to properly estimate disease progression and select the best treatment strategy. A method for accurate and automatic segmentation is hence decisive for quantitatively describing lymph nodes. In this study, the use of 3D convolutional neural networks is investigated, either through slab-wise schemes or by leveraging downsampled entire volumes. As lymph nodes have similar attenuation values to nearby anatomical structures, we use the knowledge of other organs as prior information to guide the segmentation. To assess performance, a 5-fold cross-validation strategy was followed over a dataset of 120 contrast-enhanced CT volumes. For the 1178 lymph nodes with a short-axis diameter ≥10 mm, our best-performing approach reached a patient-wise recall of 92%, a false positive per patient ratio of 5, and a segmentation overlap of 80.5%. Fusing a slab-wise and a full volume approach within an ensemble scheme generated the best performances. The anatomical priors guiding strategy is promising, yet a larger set than four organs appears needed to generate an optimal benefit. A larger dataset is also mandatory given the wide range of expressions a lymph node can exhibit (i.e., shape, location, and attenuation).
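The anatomical priors guiding can be sketched as simply stacking organ segmentations as extra input channels alongside the CT volume (a minimal illustration, not the paper's exact pipeline):

```python
import numpy as np

def build_guided_input(ct, organ_masks):
    """Concatenate the CT volume with one binary prior mask per organ."""
    channels = [ct] + list(organ_masks)
    return np.stack(channels, axis=-1)  # shape: (Z, Y, X, 1 + n_organs)
```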
Book chapter
Artificial Intelligence in Studies of Malignant Tumours
André Pedersen, Ingerid Reinertsen, Emiel Janssen, and Marit Valla
In book: Biomarkers of the Tumor Microenvironment Jan 2022
With the introduction of digital pathology and artificial intelligence (AI)-based methods, we may be facing a new era in cancer diagnostics and prognostication. AI can assist pathologists in labour-intensive tasks and potentially discover new features currently not detected and characterized in routine diagnostics. As entire digital histopathological sections can be included in the analysis, AI can be used both to study the epithelial component of a tumour and the microenvironment. Most state-of-the-art AI approaches used for image analysis utilize multi-step pipelines. AI-based methods have shown promising results in a wide range of clinically relevant tasks. It is, however, important to be aware of some challenges and limitations, such as the lack of generalizability of AI-based models, and the importance of understanding the reason behind a conclusion.
Fron.Neu
Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting
David Bouget, André Pedersen, Asgeir S. Jakola, Vasileios Kavouridis, Kyrre E. Emblem, Roelant S. Eijgelaar, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel S. Berger, Marco Conti Nibali, Julia Furtner, Shawn Hervey-Jumper, Albert J. S. Idema, Barbara Kiesel, Alfred Kloet, Emmanuel Mandonnet, Domenique M. J. Müller, Pierre A. Robe, Marco Rossi, Tommaso Sciortino, Wimar A. Brink, Michiel Wagemakers, Georg Widhalm, Marnix G. Witte, Aeilko H. Zwinderman, Philip C. De Witt Hamer, Ole Solheim, and Ingerid Reinertsen
For patients suffering from brain tumor, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and generation of clinical reports, incorporating a wide range of tumor characteristics, represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performances were assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling an easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performances were quite homogeneous across the four different brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the identified most relevant other metrics were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16–54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and features computation, 5–15 min are necessary. All trained models have been made open-access together with the source code for both software solutions and validation metrics computation. In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
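Of the listed metrics, the relative absolute volume difference is the simplest to state; a minimal sketch for binary masks:

```python
import numpy as np

def relative_absolute_volume_difference(pred: np.ndarray, gt: np.ndarray) -> float:
    """RAVD = |V_pred - V_gt| / V_gt, computed from binary masks."""
    return abs(float(pred.sum()) - float(gt.sum())) / float(gt.sum())
```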
Fron.Med
Code-Free Development and Deployment of Deep Segmentation Models for Digital Pathology
Henrik S. Pettersen, Ilya Belevich, Elin S. Røyset, Melanie R. Simpson, Erik Smistad, Eija Jokitalo, Ingerid Reinertsen, Ingunn Bakke, and André Pedersen
Application of deep learning on histopathological whole slide images (WSIs) holds promise of improving diagnostic efficiency and reproducibility but is largely dependent on the ability to write computer code or purchase commercial solutions. We present a code-free pipeline utilizing free-to-use, open-source software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep learning-based segmentation models for computational pathology. We demonstrate the pipeline on a use case of separating epithelium from stroma in colonic mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin (HE)-stained and 111 CD3-immunostained colon biopsy WSIs, was developed through active learning using the pipeline. On a hold-out test set of 36 HE and 21 CD3-stained WSIs, mean intersection over union scores of 95.5% and 95.3% were achieved for epithelium segmentation. We demonstrate pathologist-level segmentation accuracy and clinically acceptable runtime performance, and show that pathologists without programming experience can create near state-of-the-art segmentation solutions for histopathological WSIs using only free-to-use software. The study further demonstrates the strength of open-source solutions in their ability to create generalizable, open pipelines, from which trained models and predictions can seamlessly be exported in open formats and thereby be used in external solutions. All scripts, trained models, a video tutorial, and the full dataset of 251 WSIs with 31k epithelium annotations are made openly available at https://github.com/andreped/NoCodeSeg to accelerate research in the field.
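The intersection over union score reported above can be computed from binary masks as follows:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union, |A∩B| / |A∪B|, for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0
```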
2021
IEEE-B
Preliminary Processing and Analysis of an Adverse Event Dataset for Detecting Sepsis-Related Events
Melissa Yan, Lise Husby Høvik, André Pedersen, Lise Tuset Gustad, and Øystein Nytrø
2021 IEEE International Conference on Bioinformatics and Biomedicine Dec 2021
Adverse event (AE) reports contain notes detailing procedural and guideline deviations, and unwanted incidents that can bring harm to patients. Available datasets mainly focus on vigilance or post-market surveillance of adverse drug reactions or medical device failures. The lack of clinical-related AE datasets makes it challenging to study healthcare-related AEs. AEs affect 10% of hospitalized patients, and almost half are preventable. Having an AE dataset can assist in identifying possible patient safety interventions and performing quality surveillance to lower AE rates. The free-text notes can provide insight into the cause of incidents and lead to better patient care. The objective of this study is to introduce a Norwegian AE dataset and present preliminary processing and analysis for sepsis-related events, specifically peripheral intravenous catheter-related bloodstream infections. Therefore, the methods focus on performing a domain analysis to prepare and better understand the data through screening, generating synthetic free-text notes, and annotating notes.
Cancers
Glioblastoma Surgery Imaging-Reporting and Data System: Validation and Performance of the Automated Segmentation Task
David Bouget, Roelant Eijgelaar, André Pedersen, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel Berger, Marco Conti Nibali, Julia Furtner, Even Fyllingen, Shawn Hervey-Jumper, Albert Idema, Barbara Kiesel, Alfred Kloet, Emmanuel Mandonnet, Domenique Müller, Pierre Robe, Marco Rossi, and Ole Solheim
For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective if performed manually or by crude eyeballing. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved automatic tumor segmentation and compared the agreement with manual raters, described the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patient-wise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
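A common way to compute the 95th percentile Hausdorff distance from binary masks uses surface voxels and distance transforms; a minimal sketch of one standard formulation, not necessarily the exact implementation used in the paper:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile symmetric surface distance between binary masks a and b."""
    surf_a = a & ~binary_erosion(a)  # surface voxels of a
    surf_b = b & ~binary_erosion(b)  # surface voxels of b
    dt_a = distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = distance_transform_edt(~surf_b, sampling=spacing)
    dists = np.concatenate([dt_b[surf_a], dt_a[surf_b]])  # both directions
    return float(np.percentile(dists, 95))
```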
Fron.Rad
Meningioma Segmentation in T1-Weighted MRI Leveraging Global Context and Attention Mechanisms
David Bouget, André Pedersen, Sayied Hosainey, Ole Solheim, and Ingerid Reinertsen
Purpose: Meningiomas are the most common type of primary brain tumor, accounting for 30% of all brain tumors. A substantial number of these tumors are never surgically removed but rather monitored over time. Automatic and precise meningioma segmentation is, therefore, beneficial to enable reliable growth estimation and patient-specific treatment planning. Methods: In this study, we propose the inclusion of attention mechanisms on top of a U-Net architecture used as the backbone: (i) Attention-gated U-Net (AGUNet) and (ii) Dual Attention U-Net (DAUNet), using a three-dimensional (3D) magnetic resonance imaging (MRI) volume as input. Attention has the potential to leverage the global context and identify features’ relationships across the entire volume. To limit spatial resolution degradation and loss of detail inherent to encoder–decoder architectures, we studied the impact of multi-scale input and deep supervision components. The proposed architectures are trainable end-to-end and each concept can be seamlessly disabled for ablation studies. Results: The validation studies were performed using a five-fold cross-validation over 600 T1-weighted MRI volumes from St. Olavs Hospital, Trondheim University Hospital, Norway. Models were evaluated based on segmentation, detection, and speed performances, and results are reported patient-wise after averaging across all folds. For the best-performing architecture, an average Dice score of 81.6% was reached for an F1-score of 95.6%. With an almost perfect precision of 98%, meningiomas smaller than 3 ml were occasionally missed, hence reaching an overall recall of 93%. Conclusion: Leveraging global context from a 3D MRI volume provided the best performances, even if the native volume resolution could not be processed directly due to current GPU memory limitations. Overall, near-perfect detection was achieved for meningiomas larger than 3 ml, which is relevant for clinical use. In the future, the use of multi-scale designs and refinement networks should be further investigated. A larger number of cases with meningiomas below 3 ml might also be needed to improve the performance for the smallest tumors.
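As a rough illustration of the attention-gate concept that AGUNet builds on (Oktay-style gating; shapes assume a 2x resolution gap between the skip connection and the gating signal, and this is not the paper's exact implementation):

```python
from tensorflow.keras import layers

def attention_gate(x, g, inter_channels):
    """Weight skip-connection features x by an attention map derived from gate g."""
    theta_x = layers.Conv3D(inter_channels, 1)(x)         # skip features
    phi_g = layers.Conv3D(inter_channels, 1)(g)           # coarser gating signal
    phi_g = layers.UpSampling3D(size=2)(phi_g)            # match spatial size
    att = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
    att = layers.Conv3D(1, 1, activation="sigmoid")(att)  # attention coefficients
    return layers.Multiply()([x, att])
```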
Cancers
Glioblastoma Surgery Imaging—Reporting and Data System: Standardized Reporting of Tumor Volume, Location, and Resectability Based on Automated Segmentations
Ivar Kommers, David Bouget, André Pedersen, Roelant Eijgelaar, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel Berger, Marco Conti Nibali, Julia Furtner, Even Fyllingen, Shawn Hervey-Jumper, Albert Idema, Barbara Kiesel, Alfred Kloet, Emmanuel Mandonnet, Domenique Müller, Pierre Robe, Marco Rossi, and Philip De Witt Hamer
Treatment decisions for patients with presumed glioblastoma are based on tumor characteristics available from a preoperative MR scan. Tumor characteristics, including volume, location, and resectability, are often estimated or manually delineated. This process is time consuming and subjective. Hence, comparisons across cohorts, trials, or registries are subject to assessment bias. In this study, we propose a standardized Glioblastoma Surgery Imaging Reporting and Data System (GSI-RADS) based on an automated method of tumor segmentation that provides standard reports on tumor features that are potentially relevant for glioblastoma surgery. As clinical validation, we determine the agreement in extracted tumor features between the automated method and the current standard of manual segmentations from routine clinical MR scans before treatment. In an observational consecutive cohort of 1596 adult patients with a first-time surgery of a glioblastoma from 13 institutions, we segmented gadolinium-enhanced tumor parts both by a human rater and by an automated algorithm. Tumor features were extracted from segmentations of both methods and compared to assess differences, concordance, and equivalence. The laterality, contralateral infiltration, and the laterality indices were in excellent agreement. The native and normalized tumor volumes had excellent agreement, consistency, and equivalence. Multifocality, but not the number of foci, had good agreement and equivalence. The location profiles of cortical and subcortical structures were in excellent agreement. The expected residual tumor volumes and resectability indices had excellent agreement, consistency, and equivalence. Tumor probability maps were in good agreement. In conclusion, automated segmentations are in excellent agreement with manual segmentations and practically equivalent regarding tumor features that are potentially relevant for neurosurgical purposes. Standard GSI-RADS reports can be generated by open access software.
IEEE-A
FastPathology: An Open-Source Platform for Deep Learning-Based Research and Decision Support in Digital Pathology
André Pedersen, Marit Valla, Anna Bofin, Javier Frutos, Ingerid Reinertsen, and Erik Smistad
Deep convolutional neural networks (CNNs) are the current state-of-the-art for digital analysis of histopathological images. The large size of whole-slide microscopy images (WSIs) requires advanced memory handling to read, display and process these images. There are several open-source platforms for working with WSIs, but few support deployment of CNN models. These applications use third-party solutions for inference, making them less user-friendly and unsuitable for high-performance image analysis. To make deployment of CNNs user-friendly and feasible on low-end machines, we have developed a new platform, FastPathology, using the FAST framework and C++. It minimizes memory usage for reading and processing WSIs, deployment of CNN models, and real-time interactive visualization of results. Runtime experiments were conducted on four different use cases, using different architectures, inference engines, hardware configurations and operating systems. Memory usage for reading, visualizing, zooming and panning a WSI were measured, using FastPathology and three existing platforms. FastPathology performed similarly in terms of memory to the other C++-based application, while using considerably less than the two Java-based platforms. The choice of neural network model, inference engine, hardware and processors influenced runtime considerably. Thus, FastPathology includes all steps needed for efficient visualization and processing of WSIs in a single application, including inference of CNNs with real-time display of the results. Source code, binary releases, video demonstrations and test data can be found online on GitHub at https://github.com/SINTEFMedtek/FAST-Pathology/.
JMI
Fast meningioma segmentation in T1-weighted magnetic resonance imaging volumes using a lightweight 3D deep learning architecture
David Bouget, André Pedersen, Sayied Hosainey, Johanna Vanel, Ole Solheim, and Ingerid Reinertsen
Purpose: Automatic and consistent meningioma segmentation in T1-weighted magnetic resonance (MR) imaging volumes and corresponding volumetric assessment is of use for diagnosis, treatment planning, and tumor growth evaluation. We optimized the segmentation and processing speed performances using a large number of both surgically treated meningiomas and untreated meningiomas followed at the outpatient clinic. Approach: We studied two different three-dimensional (3D) neural network architectures: (i) a simple encoder-decoder similar to a 3D U-Net, and (ii) a lightweight multi-scale architecture [Pulmonary Lobe Segmentation Network (PLS-Net)]. In addition, we studied the impact of different training schemes. For the validation studies, we used 698 T1-weighted MR volumes from St. Olav University Hospital, Trondheim, Norway. The models were evaluated in terms of detection accuracy, segmentation accuracy, and training/inference speed. Results: While both architectures reached a similar Dice score of 70% on average, the PLS-Net was more accurate with an F1-score of up to 88%. The highest accuracy was achieved for the largest meningiomas. Speed-wise, the PLS-Net architecture tended to converge in about 50 h while 130 h were necessary for U-Net. Inference with PLS-Net takes less than a second on GPU and about 15 s on CPU. Conclusions: Overall, with the use of mixed precision training, it was possible to train competitive segmentation models in a relatively short amount of time using the lightweight PLS-Net architecture. In the future, the focus should be brought toward the segmentation of small meningiomas (<2 ml) to improve clinical relevance for automatic and early diagnosis and speed of growth estimates.
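The mixed precision training mentioned above can be enabled in TensorFlow/Keras with the global policy switch shown below (a general illustration; the paper's training code may differ):

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")  # fp16 compute, fp32 variables
# Build the model as usual, but keep the final activation numerically stable:
# outputs = tf.keras.layers.Activation("softmax", dtype="float32")(x)
```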
UMB
Sonopermeation Enhances Uptake and Therapeutic Effect of Free and Encapsulated Cabazitaxel
Sofie Snipstad, Yrr Mørch, Einar Sulheim, Andreas Aslund, Catharina Davies, Rune Hansen, Sigrid Berg, and André Pedersen
Delivery of drugs and nanomedicines to tumors is often heterogeneous and insufficient and, thus, of limited efficacy. Microbubbles in combination with ultrasound have been found to improve delivery to tumors, enhancing accumulation and penetration. We used a subcutaneous prostate cancer xenograft model in mice to investigate the effect of free and nanoparticle-encapsulated cabazitaxel in combination with ultrasound and microbubbles with a lipid shell or a shell of nanoparticles. Sonopermeation reduced tumor growth and prolonged survival (26%-100%), whether the free drug was co-injected with lipid-shelled microbubbles or the nanoformulation was co-injected with lipid-shelled or nanoparticle-shelled microbubbles. Consistent with the improved therapeutic response, we found enhanced uptake of nanoparticles directly after ultrasound treatment that lasted several weeks (2.3x-15.8x increase). Neither cavitation dose nor total accumulation of nanoparticles could explain the variation within treatment groups, emphasizing the need for a better understanding of the tumor biology and mechanisms involved in ultrasound-mediated treatment.
2019
IEEE-A
High Performance Neural Network Inference, Streaming, and Visualization of Medical Images Using FAST
Deep convolutional neural networks have quickly become the standard for medical image analysis. Although there are many frameworks focusing on training neural networks, there are few that focus on high performance inference and visualization of medical images. Neural network inference requires an inference engine (IE), and there are currently several IEs available, including Intel’s OpenVINO, NVIDIA’s TensorRT, and Google’s TensorFlow, which supports multiple backends, including NVIDIA’s cuDNN, AMD’s ROCm, and Intel’s MKL-DNN. These IEs only work on specific processors and have completely different application programming interfaces (APIs). In this paper, we present methods for extending FAST, an open-source high performance framework for medical imaging, to use any IE with a common programming interface, thereby making it easier for users to deploy and test their neural networks on different processors. This article provides an overview of current IEs and how they can be combined with existing software such as FAST. The methods are demonstrated and evaluated on three performance-demanding medical use cases: real-time ultrasound image segmentation, computed tomography (CT) volume segmentation, and patch-wise classification of whole slide microscopy images. Runtime performance was measured on the three use cases with several different IEs and processors. This revealed that the choice of IE and processor can affect the performance of medical neural network image analysis considerably. In the most extreme case of processing 171 ultrasound frames, the difference between the fastest and slowest configuration was half a second vs. 24 seconds. For volume processing, using the CPU or the GPU showed a difference of 2 vs. 53 seconds, and for processing a whole slide microscopy image, the difference was 81 seconds vs. almost 16 minutes. Source code, binary releases and test data can be found online on GitHub at https://github.com/smistad/FAST/.
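The idea of a common programming interface over multiple IEs can be sketched as an abstract engine class with interchangeable backends (FAST itself is C++; this Python sketch with placeholder backends is purely illustrative):

```python
from abc import ABC, abstractmethod

class InferenceEngine(ABC):
    """Common API so the processing pipeline is unchanged when the backend changes."""
    @abstractmethod
    def load(self, model_path: str) -> None: ...
    @abstractmethod
    def run(self, inputs): ...

class OpenVINOEngine(InferenceEngine):
    def load(self, model_path): self.model_path = model_path  # bind to OpenVINO here
    def run(self, inputs): raise NotImplementedError

class TensorRTEngine(InferenceEngine):
    def load(self, model_path): self.model_path = model_path  # bind to TensorRT here
    def run(self, inputs): raise NotImplementedError

def create_engine(name: str) -> InferenceEngine:
    # Pick a backend at runtime without changing the calling code.
    return {"openvino": OpenVINOEngine, "tensorrt": TensorRTEngine}[name]()
```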