About Us

We are interested in explainable deep learning and interactive visualization. Latent space visualizations can improve the explainability of AI models by giving a visual representation of the hidden or intermediate features the model has learned. These views make it easier to see relationships between input data points and the model's internal structure, which is especially useful for complex models such as deep neural networks.
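As a minimal sketch of the idea, the high-dimensional hidden activations of a model can be projected to two dimensions for plotting. The example below uses PCA via NumPy's SVD; the `hidden` array stands in for activations extracted from some intermediate layer and is purely hypothetical (in practice, nonlinear methods such as t-SNE or UMAP are also common choices).

```python
import numpy as np

def pca_project(features, k=2):
    """Project high-dimensional hidden features to k dimensions via PCA."""
    centered = features - features.mean(axis=0)
    # SVD of the centered feature matrix; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Hypothetical hidden activations: 100 samples, each a 64-dimensional feature
rng = np.random.default_rng(0)
hidden = rng.normal(size=(100, 64))

coords = pca_project(hidden)  # (100, 2) points, ready for a scatter plot
print(coords.shape)
```

Each row of `coords` is one input sample's position in the 2D latent view, so nearby points correspond to inputs the model represents similarly.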

Some of our Projects

We are also interested in architectural patterns for medical and scientific visualization: patterns that organize software components and their interactions so that complex data can be managed, processed, and visualized efficiently.