Medical Imaging Seminars – 2009

Wednesday, 9 September, 4:30pm, in Physics 126.

Automatic Summarization of Changes in Biological Image Sequences using Algorithmic Information Theory

Andy Cohen, UWM-EE

This talk will describe a broadly applicable methodology based on algorithmic information theory and algorithmic statistics for generating a concise and meaningful summary of the changes occurring within and across image sequences. The methodology requires only the availability of object extraction and tracking algorithms for a given application. The method was evaluated on sets of image sequences from seven different application domains in cell and tissue biology using a single implementation, representing terabytes of data from hundreds of image sequences. For some of these data sets we reproduced and extended previously published results. In other cases we discovered previously unknown behavioral differences that corresponded to biologically significant differences between populations. These behaviors are subtly different, and difficult or impossible for a human observer to discern visually in time-lapse image recordings.
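
The abstract does not spell out the implementation, but a standard practical surrogate for the (uncomputable) algorithmic information distance is the normalized compression distance (NCD), computed with an off-the-shelf compressor. The Python sketch below is a minimal illustration under that assumption; the encoded "track" strings are hypothetical stand-ins for the output of the object extraction and tracking step, not the speaker's actual representation.

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: a computable proxy for the
        algorithmic information distance between two byte strings."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Hypothetical encodings of tracked-object behavior over time:
    track_a = b"MOVE:2 TURN:15 MOVE:3 DIVIDE MOVE:1"
    track_b = b"MOVE:2 TURN:14 MOVE:3 DIVIDE MOVE:2"
    track_c = b"REST REST MOVE:9 TURN:90 REST REST"

    print(ncd(track_a, track_b))  # expected smaller: similar behaviors
    print(ncd(track_a, track_c))  # expected larger: dissimilar behaviors

Objects whose behavior sequences compress well together are algorithmically similar, which is the intuition behind summarizing and clustering changes without hand-designed features.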

The talk will also describe a semi-supervised extension of this methodology that was able to predict the fate outcomes of cultured retinal progenitor cells before division by analyzing their subtle shape and motion patterns as recorded by time-lapse phase-contrast microscopy. These predictions can be used to identify homogeneous populations of same-fate cells, enabling diverse functional genetic questions to be addressed. A complete cross-validation required 50 minutes on an IBM Blue Gene supercomputer; a highly optimized version of our prediction algorithm enables segmentation, tracking and cell fate prediction for 40 cells simultaneously on a standard PC within the five-minute-per-frame microscope acquisition time.


Wednesday, 7 October, 4:30pm, in Physics 126.

Electromagnetic functional imaging at the speed of brain

Sylvain Baillet, MCW Neurology

Functional imaging techniques have changed the way neuroscientists and clinicians look at the brain. In that respect, positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have contributed greatly to mapping the brain in action.

With increasing numbers of sensors integrated into whole-head arrays, electro- and magnetoencephalography (EEG, MEG) have now matured into true functional imaging modalities. With a temporal resolution in the millisecond range, their contribution to the exploration of brain function has unprecedented potential. Imaging the neural sources of EEG and MEG scalp signals is essentially a modelling problem. We will discuss how integrating structural information from MRI helps to mitigate the fundamental indeterminacy in modelling the neural generators of MEG/EEG.
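
As a concrete illustration of that modelling problem, one common way to resolve the indeterminacy is Tikhonov-regularized minimum-norm estimation over MRI-constrained source locations. The Python sketch below uses a random lead-field matrix purely for illustration; in practice L would be computed from an MRI-derived head model, and lam would be chosen against the noise level.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources = 64, 500
    L = rng.standard_normal((n_sensors, n_sources))  # lead field (forward model)
    b = rng.standard_normal(n_sensors)               # one time sample of scalp data

    lam = 1.0  # regularization weight
    # Minimum-norm estimate: x_hat = L^T (L L^T + lam I)^(-1) b
    x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
    print(x_hat.shape)  # (500,) estimated source amplitudes

With far more sources than sensors the system is underdetermined; the regularization selects the smallest-norm current distribution consistent with the data, and anatomical constraints from MRI shrink the space of admissible generators.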

The fine time resolution of MEG/EEG imaging can be exploited in many ways when studying brain function. We will therefore illustrate how basic evoked brain responses can be complemented by the identification and localization of neural oscillatory components and their interactions in specific frequency bands.
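
One common way to identify such oscillatory components, sketched below in Python with illustrative parameter choices (not values from the talk), is to band-pass filter a channel and take the Hilbert envelope as its instantaneous band-limited amplitude.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 600.0                                  # sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy channel

    bb, aa = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")  # alpha band
    alpha = filtfilt(bb, aa, x)                 # zero-phase band-pass filtering
    envelope = np.abs(hilbert(alpha))           # instantaneous amplitude
    print(envelope.mean())                      # proxy for mean alpha-band power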

From a more technical standpoint, this talk will introduce the basic signal and image processing apparatus used for MEG/EEG source imaging, and will also discuss more recent developments dedicated to identifying propagating patterns of cortical currents using optical flow techniques. A variety of experimental data examples will be provided for illustration.


Wednesday, 21 October, 4:30pm, in Physics 126.

Compressed Sensing in MRI

Lei Ying, UWM-EE

The traditional approach to sampling signals or images follows the Shannon sampling theorem, which says that the sampling rate must be at least the Nyquist rate. In practice, this principle underlies nearly all signal acquisition protocols used in electronic systems. Compressed Sensing (CS) is a new paradigm for the acquisition of sparse or compressible signals arising in a multitude of applications. The CS theory asserts that certain signals and images can be recovered from samples acquired at a rate close to their intrinsic information rate, which is well below the Nyquist rate. For this to be possible, several conditions must be satisfied regarding the signals of interest, the measurement schemes, and the reconstruction algorithms. In this talk, I will explain the fundamental conditions for CS to succeed. Although the CS theory itself is highly mathematical, the talk will focus on concepts rather than theorems. I will give a few examples of emerging applications, with an emphasis on the work my group has been doing in magnetic resonance imaging.
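
To make the sub-Nyquist claim concrete, the toy Python example below recovers a length-200 signal with only 5 nonzero entries from 60 random Gaussian measurements using ISTA (iterative soft-thresholding for the l1-regularized least-squares problem). The sizes and parameters are illustrative choices, and this is a generic CS demonstration rather than the speaker's MRI method.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 200, 60, 5
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse truth
    A = rng.standard_normal((m, n)) / np.sqrt(m)                 # random measurements
    y = A @ x                                                    # m << n samples

    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2                       # safe step size
    x_hat = np.zeros(n)
    for _ in range(500):                                         # ISTA iterations
        g = x_hat - step * A.T @ (A @ x_hat - y)                 # gradient step
        x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))         # small relative error

Sparsity of the signal, incoherence of the measurements, and a nonlinear (here l1-based) reconstruction are exactly the three kinds of conditions the talk refers to.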


Wednesday, 4 November, 4:30pm, in Physics 126.

Fluorescence spectroscopy, microscopy and imaging to assess the health of biological tissues

Masha Ranji, UWM-EE

The main focus of this talk is biophotonics, especially fluorescence spectroscopy, microscopy and imaging as noninvasive tissue diagnostic tools. We employ optical techniques, using fiber-optic probes, to acquire an “optical biopsy” of tissue health. Fluorescence-based techniques are widely used in biomedical applications as diagnostic or therapeutic tools for early detection of different types of cancer and heart disease. At present, the gold standard for detecting tissue abnormality is histopathology of excisional biopsies. This procedure requires physical removal of tissue, staining and specimen handling, and it suffers from sampling error and offline data analysis. Fluorescence techniques instead provide diagnosis of tissue status in vivo, in real time and without tissue removal.

Heart disease is the leading cause of death in the United States. A heart attack (myocardial infarction) produces a wave of dying (apoptotic) cells that propagates around the infarct area and eventually causes heart failure. Fluorescence spectroscopy and imaging techniques make it possible to monitor, in real time, the biochemical changes and metabolic levels that such diseases produce in tissue.

We also use time-lapse fluorescence microscopy to image, track and apply cytometric tools to cardiomyocytes differentiated from stem cells in vitro over long periods of time. Tracking stem cells in vitro sheds light on what they differentiate into, their migration maps, their statistical behavior, their association with other types of differentiated cells and, finally, their survival rate. As a potential therapeutic, the transplantation of stem cells into damaged myocardium has been shown to improve heart function in small animal models.


Wednesday, 18 November, 4:30pm, in Physics 126.

Single and multi-modal image analysis for Alzheimer's disease diagnosis applications

Vikas Singh, UW-Madison Biostat and Med Informatics

In this talk, I will discuss some of our ongoing work on applying machine learning techniques to identify and analyze patterns associated with various stages of Alzheimer's disease, using structural MR and PET image acquisitions of subjects. A key issue in this application is that the data are typically provided as image volumes and the datasets have relatively small sample sizes. I will discuss strategies, and experimental results, for building upon traditional group analysis by introducing simple spatial priors within boosting, in an effort to derive a classifier that differentiates two clinically distinct populations. For surface and shape data (cortical surfaces), I will present preliminary work on constructing kernels for cortical surface thickness using the notion of topological persistence. Finally, I will present our ongoing work on systematically combining information from multiple imaging modalities, clinical biomarkers and demographic data (provided by a large multi-center NIH initiative) in an effort to identify pre-symptomatic characteristics of patients at risk of Alzheimer's disease (and other neurological disorders). These multi-modal techniques are based on adaptations and extensions of multi-kernel learning methods.

This is joint work with Moo K. Chung, Chris Hinrichs, Sterling C. Johnson, and Deepti Pachauri.
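
Multi-kernel learning combines one kernel per data source and learns the combination weights jointly with the classifier. The Python sketch below fixes the weights at 0.5/0.5 purely to show the mechanics of the combination; the synthetic features are hypothetical stand-ins for imaging-derived and biomarker data.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

    rng = np.random.default_rng(2)
    X_img = rng.standard_normal((100, 50))  # e.g., image-derived features
    X_bio = rng.standard_normal((100, 10))  # e.g., clinical biomarkers
    y = rng.integers(0, 2, 100)

    # A weighted sum of valid kernels is itself a valid kernel.
    K = 0.5 * rbf_kernel(X_img) + 0.5 * linear_kernel(X_bio)
    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))  # training accuracy on the toy data

In full multi-kernel learning the two weights would be optimization variables, so the learned weights also indicate how much each modality contributes to the decision.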


Wednesday, 2 December, 4:30pm, in Physics 126.

Statistical Iterative Reconstruction for X-Ray Computed Tomography

Bruno De Man, GE Global Research Center

In the last decade researchers have made a lot of progress in model-based iterative reconstruction for X-ray computed tomography. This technique has the potential to dramatically reduce the radiation dose to the patient, among other benefits. The starting point is a Bayesian estimation framework, combined with a detailed forward model of the CT scan acquisition, an image prior model and a statistical noise model. The major remaining challenges include computation time and making the algorithms robust enough to deliver consistent image quality for day-to-day clinical use. While it would be challenging to completely replace filtered backprojection, a reconstruction technique with almost four decades of history in CT, with a completely new reconstruction technique like model-based iterative reconstruction, a lot of progress has been made in demonstrating the potential clinical impact of model-based iterative reconstruction on CT, and it is only a matter of time before it becomes routinely available for clinical application.
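
The objective at the core of such a framework is often written as a penalized weighted least-squares problem: minimize (y - Ax)^T W (y - Ax) + beta R(x), where A is the forward projector, W carries the statistical (inverse-variance) weights and R is the image prior. The Python sketch below solves a toy instance in closed form; the sizes are illustrative, and at clinical scale A is far too large to form explicitly, which is precisely why iterative algorithms that minimize the same objective are used instead.

    import numpy as np

    rng = np.random.default_rng(3)
    n_rays, n_pixels = 300, 100
    A = rng.random((n_rays, n_pixels))                   # stand-in forward projector
    x_true = rng.random(n_pixels)
    y = A @ x_true + 0.01 * rng.standard_normal(n_rays)  # noisy sinogram
    w = np.full(n_rays, 1.0)                             # statistical weights (toy)

    beta = 0.1
    D = np.eye(n_pixels) - np.eye(n_pixels, k=1)         # roughness-penalty operator

    # Normal equations of the penalized weighted least-squares objective:
    H = A.T @ (w[:, None] * A) + beta * D.T @ D
    x_hat = np.linalg.solve(H, A.T @ (w * y))
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small error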