Scientific Visualization of Very Large Data Sets

Introduction

The challenges for scientific visualization on massive data sets are two-fold:

Selected Current Research Projects

Exploration at the Exascale

Power constraints at the exascale (10^18 floating point operations per second) will preclude the traditional visualization workflow, in which simulations save data to disk and analysts later explore that data with visualization tools. Instead, we will need to embed routines into the simulation code to massively reduce the data. However, this reduction must be carried out intelligently: if it is too aggressive, analysts will not feel confident in the integrity of the data and will disregard any resulting analyses. In short, this research tackles how to balance the tension between reduction and integrity. The specific research informing this tension spans areas such as uncertainty visualization, wavelet compression, and massive concurrency. More information about this project can be found here.
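
To make the tension between reduction and integrity concrete, below is a minimal Python sketch (a toy illustration, not the project's actual in situ pipeline): it applies one level of a Haar wavelet transform to a synthetic field, keeps only the largest-magnitude coefficients, and reports the reconstruction error, showing how integrity degrades as the reduction becomes more aggressive. The field, the keep fractions, and the Haar codec itself are all illustrative choices.

    import numpy as np

    def haar2d(field):
        """One level of a 2D Haar transform (field dimensions must be even)."""
        a = (field[0::2, :] + field[1::2, :]) / np.sqrt(2.0)   # row averages
        d = (field[0::2, :] - field[1::2, :]) / np.sqrt(2.0)   # row details
        ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
        lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
        hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0)
        hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0)
        return ll, lh, hl, hh

    def ihaar2d(ll, lh, hl, hh):
        """Invert one level of the 2D Haar transform."""
        a = np.empty((ll.shape[0], ll.shape[1] * 2))
        d = np.empty_like(a)
        a[:, 0::2] = (ll + lh) / np.sqrt(2.0)
        a[:, 1::2] = (ll - lh) / np.sqrt(2.0)
        d[:, 0::2] = (hl + hh) / np.sqrt(2.0)
        d[:, 1::2] = (hl - hh) / np.sqrt(2.0)
        field = np.empty((a.shape[0] * 2, a.shape[1]))
        field[0::2, :] = (a + d) / np.sqrt(2.0)
        field[1::2, :] = (a - d) / np.sqrt(2.0)
        return field

    def reduce_field(field, keep_fraction):
        """Zero all but the largest-magnitude coefficients, then reconstruct."""
        bands = haar2d(field)
        coeffs = np.concatenate([c.ravel() for c in bands])
        cutoff = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
        kept = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in bands]
        return ihaar2d(*kept)

    # Synthetic "simulation output": a smooth field plus a sharp feature.
    x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    field = np.sin(4 * np.pi * x) * np.cos(2 * np.pi * y) + (np.hypot(x - 0.5, y - 0.5) < 0.1)

    for keep in (0.50, 0.10, 0.02):
        approx = reduce_field(field, keep)
        print(f"keep {keep:4.0%} of coefficients -> max abs error {np.max(np.abs(field - approx)):.3e}")

In an actual in situ setting, one would store the retained coefficients rather than the reconstructed field, and report an error bound alongside the reduced data so that analysts can judge its integrity for themselves.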

Efficient Parallel Algorithms

Many visualization algorithms are difficult to parallelize and even more difficult to make run efficiently in parallel. Our group has recently published new advances for stream surfaces (below, left) and techniques for dealing with complex inputs, specifically adaptive mesh refinement (below, right).
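
As a point of reference, the sketch below (illustrative only, and not the algorithm from the publications) shows the naive "one task per seed" decomposition for building streamlines, the curves that are stitched together into a stream surface. The vector field, the seed curve, and the integrator settings are invented for the example. This decomposition is easy when every worker can evaluate an analytic field, but it becomes much harder once the field is large, distributed across nodes, or stored in an adaptively refined mesh, which is where the research questions lie.

    import numpy as np
    from multiprocessing import Pool

    def velocity(p):
        """Analytic 2D circulating flow used as stand-in data."""
        x, y = p
        return np.array([-y, x])

    def advect(seed, dt=0.01, steps=2000):
        """Integrate one streamline with fourth-order Runge-Kutta."""
        p = np.array(seed, dtype=float)
        path = [p.copy()]
        for _ in range(steps):
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * dt * k1)
            k3 = velocity(p + 0.5 * dt * k2)
            k4 = velocity(p + dt * k3)
            p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            path.append(p.copy())
        return np.array(path)

    if __name__ == "__main__":
        # Seed a curve of points; adjacent streamlines would be stitched into a surface.
        seeds = [(1.0, 0.05 * i) for i in range(16)]
        with Pool() as pool:
            streamlines = pool.map(advect, seeds)
        print(f"advected {len(streamlines)} streamlines, {streamlines[0].shape[0]} points each")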

Heterogeneous Algorithms

As compute nodes increasingly include accelerators (such as GPUs), we must explore how best to map visualization algorithms onto them and evaluate their efficacy. Further, although these accelerators provide increased computational power, that power is offset by increased latencies. In a distributed-memory setting -- where accelerators must coordinate their activities over a network -- the optimal way to design an algorithm is rarely the naive one.
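
A back-of-the-envelope sketch of why the naive design loses: the standard latency/bandwidth ("alpha-beta") cost model below compares shipping work between accelerators one item at a time versus in large batches. The constants are invented, order-of-magnitude values, not measurements from any particular machine.

    LATENCY_S = 5e-6           # per-message latency (alpha), seconds
    BANDWIDTH_B_PER_S = 1e10   # link bandwidth (1/beta), bytes per second
    COMPUTE_S_PER_ITEM = 2e-8  # time to process one item on the accelerator

    def exchange_time(num_items, item_bytes, items_per_message):
        """Total time to communicate and process num_items at a given message granularity."""
        messages = -(-num_items // items_per_message)  # ceiling division
        comm = messages * LATENCY_S + num_items * item_bytes / BANDWIDTH_B_PER_S
        comp = num_items * COMPUTE_S_PER_ITEM
        return comm + comp

    num_items, item_bytes = 1_000_000, 64
    for batch in (1, 100, 10_000, 1_000_000):
        print(f"batch size {batch:>9}: {exchange_time(num_items, item_bytes, batch) * 1e3:8.2f} ms")

With per-message latency dominating, the fine-grained version spends seconds on communication that batching reduces to tens of milliseconds, even though both variants move the same bytes and perform the same computation.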

Creating Insight from Large Scientific Data Sets

Our group has extensive contacts with end users and performs application-oriented research to help them better understand their data. This often means developing new techniques or applying existing techniques in new ways, such as the Finite-Time Lyapunov Exponent (FTLE)-based analysis of oil dispersion in the Gulf of Mexico (below, left), nuclear reactor design (below, middle-top), identification and analysis of features in turbulent flow (below, middle-bottom), and explosions of stars (below, right). More information about the analysis of oil dispersion in the Gulf of Mexico can be found here.
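
For readers unfamiliar with FTLE, the sketch below computes a Finite-Time Lyapunov Exponent field with numpy on the standard analytic "double gyre" test flow, a common stand-in for ocean-current data (it is not the Gulf of Mexico dataset or the group's production code). Particles seeded on a grid are advected to obtain the flow map, and the FTLE is ln(sqrt(lambda_max)) / |T|, where lambda_max is the largest eigenvalue of the Cauchy-Green tensor formed from the flow-map gradient and T is the integration time.

    import numpy as np

    A, EPS, OMEGA = 0.1, 0.25, 2.0 * np.pi / 10.0  # standard double-gyre parameters

    def velocity(x, y, t):
        """Time-dependent double-gyre velocity on the domain [0,2] x [0,1]."""
        s = EPS * np.sin(OMEGA * t)
        f = s * x**2 + (1.0 - 2.0 * s) * x
        dfdx = 2.0 * s * x + (1.0 - 2.0 * s)
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    def flow_map(x, y, t0, T, steps=200):
        """Advect every grid point from t0 to t0+T with RK4; return final positions."""
        dt, t = T / steps, t0
        for _ in range(steps):
            k1u, k1v = velocity(x, y, t)
            k2u, k2v = velocity(x + 0.5 * dt * k1u, y + 0.5 * dt * k1v, t + 0.5 * dt)
            k3u, k3v = velocity(x + 0.5 * dt * k2u, y + 0.5 * dt * k2v, t + 0.5 * dt)
            k4u, k4v = velocity(x + dt * k3u, y + dt * k3v, t + dt)
            x = x + (dt / 6.0) * (k1u + 2 * k2u + 2 * k3u + k4u)
            y = y + (dt / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += dt
        return x, y

    # Seed a regular grid of particles and compute the flow map over time T.
    x0, y0 = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
    T = 15.0
    xT, yT = flow_map(x0, y0, t0=0.0, T=T)

    # Gradient of the flow map via finite differences on the seeding grid.
    dxT_dx = np.gradient(xT, x0[0, :], axis=1)
    dxT_dy = np.gradient(xT, y0[:, 0], axis=0)
    dyT_dx = np.gradient(yT, x0[0, :], axis=1)
    dyT_dy = np.gradient(yT, y0[:, 0], axis=0)

    # Largest eigenvalue of the 2x2 Cauchy-Green tensor C = F^T F, then the FTLE.
    c11 = dxT_dx**2 + dyT_dx**2
    c12 = dxT_dx * dxT_dy + dyT_dx * dyT_dy
    c22 = dxT_dy**2 + dyT_dy**2
    lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22)**2 + 4.0 * c12**2))
    ftle = np.log(np.sqrt(np.maximum(lam_max, 1e-300))) / abs(T)
    print("FTLE range:", ftle.min(), ftle.max())

Ridges of high FTLE mark the transport barriers (Lagrangian coherent structures) that make this quantity useful for tracking how material such as dispersed oil spreads.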

Collaborations

We collaborate with a number of individuals and research programs:

Faculty

Hank Childs, Assistant Professor

Selected Publications

Here are a few recent publications from the group: