We are interested in explainable deep learning and interactive visualization. Latent space visualizations can help improve the explainability of AI models by providing a visual representation of the hidden or intermediate features learned by the model. These visualizations can make it easier to understand the relationships between input data points and the model's internal structure, which can be especially useful for complex models like deep learning networks.
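As a minimal sketch of the idea, the following projects a batch of hidden-layer activations down to 2D so they can be scatter-plotted as a latent space view. The activations here are synthetic stand-ins (random vectors); a real tool would pull them from an intermediate layer of the model.

```python
import numpy as np

# Hypothetical batch of intermediate-layer activations:
# 100 samples, each a 64-dimensional hidden feature vector.
rng = np.random.default_rng(0)
activations = rng.normal(size=(100, 64))

# Project to 2D with PCA (via SVD) so the latent space
# can be shown in an interactive scatter plot.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T  # (100, 2) points to plot
```

Each 2D point then stands in for one input sample, and clusters or gradients in the plot hint at structure the model has learned.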
Cxr8Explorer uses the Anat-0-Mixer control, an interactive latent space visualization of patient geometries.
WarpTPS uses a landmark-based latent space to interact with anatomical deformations.
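A landmark-based latent space can be sketched as interpolation between two landmark sets: a morph fraction t acts as the latent coordinate, and each intermediate landmark configuration would drive a warp (e.g., a thin-plate spline) of the underlying image. The landmark coordinates below are made up for illustration.

```python
import numpy as np

# Hypothetical 2D landmark sets for a source and a target anatomy.
source = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
target = np.array([[0.1, 0.0], [1.2, 0.1], [0.0, 0.9]])

def morph(t):
    """Landmark positions at morph fraction t in [0, 1]; each
    intermediate set would parameterize a warp of the image."""
    return (1.0 - t) * source + t * target

halfway = morph(0.5)
```

Dragging a slider over t then moves the anatomy continuously between the two configurations.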
pheonixrt is a demonstration application for a novel inverse planning algorithm for radiotherapy, based on the relative entropy between a target dose-volume histogram (DVH) and the actual DVH. The target DVH represents the desired distribution of radiation doses within the patient's body, aiming to maximize the dose delivered to the tumor while minimizing the dose to surrounding healthy tissues. The actual DVH represents the dose distribution achieved by the treatment planning system as it optimizes the radiation delivery parameters.
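The relative entropy objective can be sketched as a KL divergence between two normalized histograms over shared dose bins. The dose bins and histogram values below are illustrative, not taken from pheonixrt.

```python
import numpy as np

# Hypothetical differential DVHs over shared dose bins,
# each normalized to sum to 1 (fractional volume per bin).
target_dvh = np.array([0.02, 0.03, 0.05, 0.10, 0.20, 0.30, 0.20, 0.10])
actual_dvh = np.array([0.05, 0.05, 0.05, 0.15, 0.20, 0.25, 0.15, 0.10])

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two normalized histograms.
    A small epsilon guards against log(0) on empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

divergence = relative_entropy(target_dvh, actual_dvh)
```

The divergence is zero when the actual DVH matches the target and positive otherwise, so the planner can minimize it by adjusting the delivery parameters.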
VSIM_OGL, PaintToolMvvm, and MprIsocurveMvvm are examples of different architectural approaches to medical imaging visualization and interaction.
The ALGT library demonstrates the use of Prolog to verify the algorithms implemented in VSIM_OGL.
SRODecoderRing demonstrates a learned mapping between matrices and 6 DoF corrections.
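For intuition, the closed-form version of such a mapping extracts six parameters (three translations, three rotation angles) from a 4x4 rigid transform; a learned model would approximate this function from data. The Z-Y-X Euler convention used here is an illustrative choice, not necessarily the one SRODecoderRing uses.

```python
import numpy as np

def six_dof_from_matrix(m):
    """Closed-form 6 DoF (tx, ty, tz, rx, ry, rz) from a 4x4 rigid
    transform, assuming R = Rz(rz) @ Ry(ry) @ Rx(rx) (Z-Y-X Euler
    angles, radians). A learned decoder would approximate this map."""
    r = m[:3, :3]
    tx, ty, tz = m[:3, 3]
    ry = np.arcsin(-r[2, 0])
    rx = np.arctan2(r[2, 1], r[2, 2])
    rz = np.arctan2(r[1, 0], r[0, 0])
    return np.array([tx, ty, tz, rx, ry, rz])

# The identity transform maps to all-zero corrections.
zero_correction = six_dof_from_matrix(np.eye(4))
```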
Eevorg browser shows a succession of generated bulbs and allows the user to select the next bulb.
theWheel is an older interactive graph visualization tool for knowledge navigation.
Petrobot is a hexapod robot that engages pets in interactive play.
We are also interested in architectural patterns for medical and scientific visualization: patterns that organize software components and their interactions so that complex data can be managed, processed, and visualized efficiently.