Precomputed Radiative Heat Transport for Efficient Thermal Simulation
Architectural design and urban planning are complex design tasks. Predicting the thermal impact of design choices at interactive rates enhances the ability of designers to improve energy efficiency and avoid problematic heat islands while maintaining design quality. We show how to use and adapt methods from computer graphics to efficiently simulate heat transfer via thermal radiation, thereby improving user guidance in the early design phase of large-scale construction projects and helping to increase energy efficiency and outdoor comfort. Our method combines a hardware-accelerated photon tracing approach with a carefully selected finite element discretization, inspired by precomputed radiance transfer. This combination allows us to precompute a radiative transport operator, which we then use to rapidly solve either steady-state or transient heat transport throughout the entire scene. Our formulation integrates time-dependent solar irradiation data without requiring changes in the transport operator, allowing us to quickly analyze many different scenarios such as common weather patterns, monthly or yearly averages, or transient simulations spanning multiple days or weeks. We show how our approach can be used for interactive design workflows such as city planning via fast feedback in the early design phase.
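The reusable core of such a pipeline is a fixed linear transport operator applied to changing source terms. The following sketch (with hypothetical names and a random stand-in matrix, not the paper's actual discretization or photon tracer) illustrates why precomputing the operator makes both steady-state and transient solves cheap for arbitrary irradiation data:

```python
import numpy as np

# Hypothetical size: n surface patches in the scene.
n = 1000
rng = np.random.default_rng(0)

# Stand-in for the precomputed radiative transport operator: entry (i, j)
# is the fraction of power leaving patch j that arrives at patch i. In the
# paper this operator is estimated once via hardware-accelerated photon
# tracing; here it is random for illustration.
A = rng.random((n, n))
A *= 0.9 / A.sum(axis=0)        # keep spectral radius < 1 so solves converge

def steady_state(source):
    """Solve (I - A) T = source for the equilibrium exchange."""
    return np.linalg.solve(np.eye(n) - A, source)

def transient(source_of_t, T0, dt, steps):
    """Explicit Euler stepping with a time-varying solar source. Because A
    is fixed, new irradiation data never requires re-tracing photons."""
    T = T0.copy()
    for k in range(steps):
        T += dt * (A @ T - T + source_of_t(k * dt))
    return T

# Example: a crude day/night cycle as a time-dependent source term.
base = rng.random(n)
T = transient(lambda t: base * max(0.0, np.sin(2 * np.pi * t / 24.0)),
              T0=np.zeros(n), dt=0.1, steps=2400)
```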
3D Generative Model Latent Disentanglement via Local Eigenprojection
Designing realistic digital humans is extremely complex. Most data-driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural-network-based generative models of 3D head and body meshes. Encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple the attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state-of-the-art, but also maintain good generation capabilities with training times comparable to the vanilla implementations of the models. Our code and pre-trained models are available at github.com/simofoti/LocalEigenprojDisentangled.
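As a rough illustration of a local eigenprojection (our simplified reading, not the paper's exact loss; the authoritative formulation is in the linked repository), one can project per-vertex displacements of an attribute region onto low-frequency eigenvectors of that region's Laplacian and penalize the corresponding latent slice for deviating from the projection:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
import torch

# Toy region: a path graph standing in for the vertices of one identity
# attribute region (e.g., the nose); its Laplacian gives the local basis.
n = 50
A = sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1])
L_region = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def local_eigenbasis(L, k=8):
    # Low-frequency eigenvectors of the region Laplacian (shift-invert
    # near zero to get the smallest eigenvalues robustly).
    _, phi = eigsh(L.tocsc().astype(float), k=k, sigma=-1e-3)
    return torch.from_numpy(phi).float()               # (n, k)

phi = local_eigenbasis(L_region)

def eigenprojection_loss(z_region, disp, phi):
    # disp: (n,) per-vertex displacement magnitudes of a training shape.
    # Penalize the latent slice for deviating from the eigenprojection.
    target = phi.T @ disp                              # (k,)
    return torch.mean((z_region - target) ** 2)

loss = eigenprojection_loss(torch.zeros(8), torch.randn(n), phi)
```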
Visual Parameter Space Exploration in Time and Space
Computational models, such as simulations, are central to a wide range of fields in science and industry. These models take input parameters and produce some output. To fully exploit their utility, the relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the sheer size of the parameter space, and can be supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and the approaches that support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies
Networks are abstract and ubiquitous data structures, defined as a set of data points and the relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever-growing and widely varied body of research has led to several surveys being published, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of the research remains an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, the terminology used in the individual surveys lacks clarity and uniformity, which requires further effort when mapping and categorizing the plethora of different visualization techniques and approaches. In this paper, we aim to provide researchers and practitioners alike with a "roadmap" detailing the current research trends in the field of network visualization. We design our contribution as a meta-survey in which we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, provide a comprehensive analysis of their varying support for each network visualization discipline, and establish and discuss a classification of the individual tasks. With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.
DASS Good: Explainable Data Mining of Spatial Cohort Data
Developing applicable clinical machine learning models is a difficult task when the data includes spatial information, for example, radiation dose distributions across adjacent organs at risk. We describe the co-design of a modeling system, DASS, to support the hybrid human-machine development and validation of predictive models for estimating long-term toxicities related to radiotherapy doses in head and neck cancer patients. Developed in collaboration with domain experts in oncology and data mining, DASS incorporates human-in-the-loop visual steering, spatial data, and explainable AI to augment domain knowledge with automatic data mining. We demonstrate DASS with the development of two practical clinical stratification models and report feedback from domain experts. Finally, we describe the design lessons learned from this collaborative experience.
ParaDime: A Framework for Parametric Dimensionality Reduction
ParaDime is a framework for parametric dimensionality reduction (DR). In parametric DR, neural networks are trained to embed high-dimensional data items in a low-dimensional space while minimizing an objective function. ParaDime builds on the idea that the objective functions of several modern DR techniques result from transformed inter-item relationships. It provides a common interface for specifying these relations and transformations and for defining how they are used within the losses that govern the training process. Through this interface, ParaDime unifies parametric versions of DR techniques such as metric MDS, t-SNE, and UMAP. It allows users to fully customize all aspects of the DR process. We show how this ease of customization makes ParaDime suitable for experimenting with interesting techniques such as hybrid classification/embedding models and supervised DR. This way, ParaDime opens up new possibilities for visualizing high-dimensional data.
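To make the underlying idea concrete without relying on ParaDime's actual API (the sketch below is generic PyTorch, and all names are ours), here is a minimal parametric-DR loop in which relations between items in data space, pairwise distances, are compared with relations between the embedded points, yielding a metric-MDS-style stress:

```python
import torch

def pairwise_dist(x):
    return torch.cdist(x, x)

X = torch.randn(256, 50)                      # high-dimensional items
net = torch.nn.Sequential(                    # the parametric embedding
    torch.nn.Linear(50, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

D_high = pairwise_dist(X)                     # relations in data space
for step in range(500):
    Y = net(X)                                # relations in embedding space
    loss = ((pairwise_dist(Y) - D_high) ** 2).mean()   # MDS-style stress
    opt.zero_grad()
    loss.backward()
    opt.step()

# Unlike classical MDS, the trained network embeds unseen items directly:
# y_new = net(x_new). Swapping the relation transform and loss yields
# parametric variants of t-SNE- or UMAP-like objectives.
```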
Doom or Deliciousness: Challenges and Opportunities for Visualization in the Age of Generative Models
Generative text-to-image models (as exemplified by DALL-E, MidJourney, and Stable Diffusion) have recently made enormous technological leaps, demonstrating impressive results in many graphical domains, from logo design to digital painting to photographic composition. However, the quality of these results has led to existential crises in some fields of art, raising questions about the role of human agency in the production of meaning in a graphical context. Such issues are central to visualization, and while these generative models have yet to be widely applied in visualization, it seems only a matter of time until their integration is manifest. Seeking to circumvent similar ponderous dilemmas, we attempt to understand the roles that generative models might play across visualization. We do so by constructing a framework that characterizes what these technologies offer at various stages of the visualization workflow, augmented and analyzed through semi-structured interviews with 21 experts from related domains. Through this work, we map the space of opportunities and risks that might arise in this intersection, identifying doomsday prophecies and delicious low-hanging fruits that are ripe for research.
Visual Exploration of Financial Data with Incremental Domain Knowledge
Modelling the dynamics of a growing financial environment is a complex task that requires domain knowledge, expertise, and access to heterogeneous types of information. Such information can stem from several sources at different scales, complicating the task of forming a holistic impression of the financial landscape, especially in terms of the economic relationships between firms. Bringing this scattered information into a common context is, therefore, an essential step in the process of obtaining meaningful insights about the state of an economy. In this paper, we present a Visual Analytics (VA) approach for exploring financial data across different scales, from individual firms up to nation-wide aggregate data. Our solution is coupled with a pipeline for the generation of firm-to-firm financial transaction networks, fusing information about individual firms with sector-to-sector transaction data and domain knowledge on macroscopic aspects of the economy. Each network can have multiple instances to compare different scenarios. We collaborated with experts from finance and economics during the development of our VA solution, and we evaluated our approach with seven domain experts across industry and academia through a qualitative insight-based evaluation. The analysis shows how our approach enables the generation of insights and how the incorporation of transaction models assists users in their exploration of a national economy.
Shape-Guided Mixed Metro Map Layout
Metro or transit maps are schematic representations of transit networks that facilitate effective route-finding. These maps are often advertised on a web page or pamphlet, highlighting routes from source to destination stations. To visually support such route-finding, designers often distort the layout by embedding symbolic shapes (e.g., circular routes) in order to guide readers' attention (e.g., the Moscow map and the Japan railway map). However, manually producing such maps is labor-intensive, and the effect of the embedded shapes remains unclear. In this paper, we propose an approach to generate such mixed metro maps that takes user-defined shapes as input. In this mixed design, lines that are used to approximate the shapes are arranged symbolically, while the remaining lines follow classical layout conventions. A three-step algorithm, comprising (1) detecting and selecting routes for shape approximation, (2) shape and layout deformation, and (3) aligning lines on a grid, is integrated to guarantee good visual quality. Our contribution lies in the definition of the mixed metro map problem and the formulation of design criteria so that the problem can be resolved systematically using the optimization paradigm. Finally, we evaluate the performance of our approach and perform a user study to test whether the embedded shapes are recognizable and whether they reduce the map quality.
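As a toy illustration of step (3), aligning lines on a grid, the snippet below snaps an edge to the nearest octilinear direction (a multiple of 45°), the classical metro-map convention; the paper's actual algorithm solves this jointly with the other criteria via optimization, so this is only a conceptual sketch:

```python
import numpy as np

def snap_octilinear(p, q):
    """Return q moved so that the segment p -> q runs at a multiple of 45 degrees,
    preserving the segment length."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    angle = np.arctan2(d[1], d[0])
    snapped = np.round(angle / (np.pi / 4)) * (np.pi / 4)   # nearest octilinear angle
    return p + np.linalg.norm(d) * np.array([np.cos(snapped), np.sin(snapped)])

print(snap_octilinear([0, 0], [2.0, 0.3]))   # snaps the nearly horizontal edge to horizontal
```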
Non-Isometric Shape Matching via Functional Maps on Landmark-Adapted Bases
We propose a principled approach for non-isometric landmark-preserving non-rigid shape matching. Our method is based on the functional map framework, but rather than promoting isometries we focus on near-conformal maps that preserve landmarks exactly. We achieve this, first, by introducing a novel landmark-adapted basis using an intrinsic Dirichlet-Steklov eigenproblem. Second, we establish the functional decomposition of conformal maps expressed in this basis. Finally, we formulate a conformally-invariant energy that promotes high-quality landmark-preserving maps, and show how it can be optimized via a variant of the recently proposed ZoomOut method that we extend to our setting. Our method is descriptor-free, efficient and robust to significant mesh variability. We evaluate our approach on a range of benchmark datasets and demonstrate state-of-the-art performance on non-isometric benchmarks and near state-of-the-art performance on isometric ones.
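For readers unfamiliar with the functional map framework and the ZoomOut refinement the paper builds on, the following generic sketch (using ordinary per-shape eigenbases rather than the paper's Dirichlet-Steklov basis) shows the core iteration of converting between a small functional map matrix and a pointwise correspondence while growing the basis size:

```python
import numpy as np
from scipy.spatial import cKDTree

def zoomout(phi1, phi2, pi, k0=10, k_max=60, step=5):
    """Generic ZoomOut-style refinement.
    phi1: (n1, K) and phi2: (n2, K) basis functions on shapes 1 and 2
    (e.g., Laplace-Beltrami eigenfunctions), K >= k_max.
    pi: initial pointwise map, pi[j] = vertex of shape 1 matched to
    vertex j of shape 2."""
    for k in range(k0, k_max + 1, step):
        # Functional map in the first k basis functions: maps coefficients
        # of functions on shape 1 to coefficients on shape 2.
        C = np.linalg.pinv(phi2[:, :k]) @ phi1[pi, :k]      # (k, k)
        # Convert back to a pointwise map via nearest neighbors in the
        # spectral embedding aligned by C.
        tree = cKDTree(phi1[:, :k] @ C.T)
        _, pi = tree.query(phi2[:, :k])
    return C, pi
```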
Smooth Interpolating Curves with Local Control and Monotone Alternating Curvature
We propose a method for the construction of a planar curve based on piecewise clothoids and straight lines that intuitively interpolates a given sequence of control points. Our method has several desirable properties that are not simultaneously fulfilled by previous approaches: our interpolating curves are C² continuous, and their computation does not rely on global optimization and has local support, enabling fast evaluation for interactive modeling. Further, the sign of the curvature at control points is consistent with the control polygon; the curvature attains its extrema at control points and is monotone between consecutive control points of opposite curvature signs. In addition, we can ensure that the curve has self-intersections only when the control polygon also self-intersects between the same control points. For more fine-grained control, the user can specify the desired curvature and tangent values at certain control points, though this is not required by our method. Our local optimization can lead to discontinuities with respect to the locations of the control points, although the problem is limited by its locality. We demonstrate the utility of our approach in generating various curves and provide a comparison with the state of the art.
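The clothoid building block itself is easy to evaluate: a clothoid is the curve whose curvature varies linearly with arc length, which is what enables the monotone-curvature guarantees above. A minimal sketch using SciPy's Fresnel integrals (our parametrization, not the paper's construction):

```python
import numpy as np
from scipy.special import fresnel

def clothoid(s, kappa_rate=1.0):
    """Points of the clothoid with curvature kappa(s) = kappa_rate * s,
    starting at the origin with a horizontal tangent."""
    a = np.sqrt(np.pi / kappa_rate)       # rescale to SciPy's Fresnel convention
    S, C = fresnel(np.asarray(s) / a)     # S(z) = int sin(pi t^2 / 2), C(z) = int cos(...)
    return a * C, a * S                   # x(s), y(s)

s = np.linspace(0.0, 3.0, 200)
x, y = clothoid(s)                        # curvature along this curve is exactly s
```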
Hex Me If You Can
HexMe consists of 189 tetrahedral meshes with tagged features and a workflow to generate them. The primary purpose of HexMe meshes is to enable consistent and practically meaningful evaluation of hexahedral meshing algorithms and related techniques, specifically regarding the correct meshing of specified feature points, curves, and surfaces. The tetrahedral meshes have been generated with Gmsh, starting from 63 computer-aided design (CAD) models from various databases. To highlight and label the diverse and challenging aspects of hexahedral mesh generation, the CAD models are classified into three categories: simple, nasty, and industrial. For each CAD model, we provide three kinds of tetrahedral meshes (uniform, curvature-adapted, and box-embedded). The mesh generation pipeline is defined with the help of Snakemake, a modern workflow management system, which allows us to specify a fully automated, extensible, and sustainable workflow. It is possible to download the whole dataset or to select individual meshes by browsing the online catalog. The HexMe dataset is built with evolution in mind and is prepared for future developments. A public GitHub repository hosts the HexMe workflow, where external contributions and future releases are possible and encouraged. We demonstrate the value of HexMe by exploring the robustness limitations of a state-of-the-art frame-field-based hexahedral meshing algorithm. All feature entities are meshed correctly for only 19 of the 189 tagged tetrahedral inputs, while the average success rates are 70.9% / 48.5% / 34.6% for feature points / curves / surfaces.
Visual Parameter Selection for Spatial Blind Source Separation
Analysis of spatial multivariate data, i.e., measurements at irregularly spaced locations, is a challenging topic in visualization and statistics alike. Such data are integral to many domains; e.g., indicators of valuable minerals are measured for mine prospecting. Popular analysis methods, like PCA, often by design do not account for the spatial nature of the data, so they, together with their spatial variants, must be employed very carefully. Clearly, it is preferable to use methods that were specifically designed for such data, like spatial blind source separation (SBSS). However, SBSS requires two tuning parameters, which are themselves complex spatial objects. Setting these parameters involves navigating two large and interdependent parameter spaces, while also taking into account prior knowledge of the physical reality represented by the data. To support analysts in this process, we developed a visual analytics prototype. We evaluated it with experts in visualization, SBSS, and geochemistry. Our evaluations show that our interactive prototype allows analysts to define complex and realistic parameter settings efficiently, which was previously impractical. Settings identified by a non-expert led to remarkable and surprising insights for a domain expert. This paper therefore presents important first steps towards enabling the use of a promising analysis method for spatial multivariate data.
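To give a flavor of what these tuning parameters control (a simplified sketch with our own names, not the prototype's implementation): SBSS builds local covariance matrices from spatial ring kernels, one per ring, and jointly diagonalizes them; the rings' radii are precisely the kind of spatial parameter the prototype helps analysts to set:

```python
import numpy as np
from scipy.spatial.distance import cdist

def ring_local_cov(coords, X, r_in, r_out):
    """coords: (n, 2) sample locations; X: (n, p) multivariate measurements.
    Accumulates a local covariance matrix over all pairs of locations whose
    distance falls inside the ring [r_in, r_out)."""
    Xc = X - X.mean(axis=0)                          # center the measurements
    D = cdist(coords, coords)
    K = ((D >= r_in) & (D < r_out)).astype(float)    # ring kernel weights
    assert K.sum() > 0, "empty ring: widen [r_in, r_out)"
    M = Xc.T @ K @ Xc / K.sum()
    return 0.5 * (M + M.T)                           # symmetrize

# SBSS jointly diagonalizes (whitened versions of) several such matrices,
# one per ring; choosing the rings is the tuning problem discussed above.
rng = np.random.default_rng(1)
coords, X = rng.random((100, 2)), rng.random((100, 5))
covs = [ring_local_cov(coords, X, r0, r0 + 0.1) for r0 in (0.0, 0.1, 0.2)]
```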
Life Cycle of SARS-CoV-2: From Sketch to Visualization in Atomistic Resolution
Guide Me in Analysis: A Framework for Guidance Designers
Guidance is an emerging topic in the field of visual analytics. Guidance can support users in pursuing their analytical goals more efficiently and can help make the analysis successful. However, it is not clear how guidance approaches should be designed and which specific factors should be considered for effective support. In this paper, we approach this problem from the perspective of guidance designers. We present a framework comprising requirements and a set of specific phases designers should go through when designing guidance for visual analytics. We relate this process to a set of quality criteria that are necessary for obtaining a suitable and effective guidance solution and that we aim to support with our framework. To demonstrate the practical usability of our methodology, we apply our framework to the design of guidance in three analysis scenarios and a design walk-through session. Moreover, we list the emerging challenges and report how the framework can be used to design guidance solutions that mitigate these issues.
NEVA: Visual Analytics to Identify Fraudulent Networks
Trustworthiness, reputation, security, and quality are the main concerns of public and private financial institutions. To detect fraudulent behaviour, several techniques pursuing different goals are applied. For well-defined problems, analytical methods can examine the history of customer transactions. However, fraudulent behaviour is constantly changing, which results in ill-defined problems. Furthermore, analysing the behaviour of individual customers is not sufficient to detect more complex structures, such as networks of fraudulent actors. We propose NEVA (Network dEtection with Visual Analytics), a Visual Analytics exploration environment that supports the analysis of customer networks in order to reduce false-negative and false-positive fraud alarms. Multiple coordinated views allow for exploring complex relations and dependencies in the data. A guidance-enriched component for network pattern generation, detection, and filtering supports exploring and analysing the relationships of nodes at different levels of complexity. In six expert interviews, we illustrate the applicability and usability of NEVA.
Peax: Interactive Visual Pattern Search in Sequential Data Using Unsupervised Deep Representation Learning
We present Peax, a novel feature-based technique for interactive visual pattern search in sequential data, like time series or data mapped to a genome sequence. Visually searching for patterns by similarity is often challenging because of the large search space, the visual complexity of patterns, and the user's perception of similarity. For example, in genomics, researchers try to link patterns in multivariate sequential data to cellular or pathogenic processes, but a lack of ground truth and high variance makes automatic pattern detection unreliable. We have developed a convolutional autoencoder for unsupervised representation learning of regions in sequential data that can capture more visual details of complex patterns compared to existing similarity measures. Using this learned representation as features of the sequential data, our accompanying visual query system enables interactive feedback-driven adjustments of the pattern search to adapt to the users' perceived similarity. Using an active learning sampling strategy, Peax collects user-generated binary relevance feedback. This feedback is used to train a model for binary classification, to ultimately find other regions that exhibit patterns similar to the search target. We demonstrate Peax's features through a case study in genomics and report on a user study with eight domain experts to assess the usability and usefulness of Peax. Moreover, we evaluate the effectiveness of the learned feature representation for visual similarity search in two additional user studies. We find that our models retrieve significantly more similar patterns than other commonly used techniques.
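A minimal 1D convolutional autoencoder of the kind described, with illustrative layer sizes rather than Peax's exact architecture, might look as follows; the bottleneck activations serve as the learned features for similarity search:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Reconstructs fixed-size windows of sequential data; the latent code z
    is the learned feature representation used for pattern search."""
    def __init__(self, window=120, latent=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(8, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (window // 4), latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 16 * (window // 4)),
            nn.Unflatten(1, (16, window // 4)), nn.ReLU(),
            nn.ConvTranspose1d(16, 8, 9, stride=2, padding=4, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 9, stride=2, padding=4, output_padding=1),
        )

    def forward(self, x):                     # x: (batch, 1, window)
        z = self.encoder(x)                   # bottleneck features
        return self.decoder(z), z

model = ConvAE()
x = torch.randn(32, 1, 120)                   # a batch of data windows
recon, features = model(x)
loss = nn.functional.mse_loss(recon, x)       # unsupervised reconstruction loss
```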
CPU Ray Tracing of Tree-Based Adaptive Mesh Refinement Data
Adaptive mesh refinement (AMR) techniques allow for representing a simulation's computation domain in an adaptive fashion. Although these techniques have found widespread adoption in high-performance computing simulations, visualizing their data output interactively and without cracks or artifacts remains challenging. In this paper, we present an efficient solution for direct volume rendering and hybrid implicit isosurface ray tracing of tree-based AMR (TB-AMR) data. We propose a novel reconstruction strategy, Generalized Trilinear Interpolation (GTI), to interpolate across AMR level boundaries without cracks or discontinuities in the surface normal. We employ a general sparse octree structure supporting a wide range of AMR data, and use it to accelerate volume rendering, hybrid implicit isosurface rendering and value queries. We demonstrate that our approach achieves artifact-free isosurface and volume rendering and provides higher quality output images compared to existing methods at interactive rendering rates.
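For reference, the classical trilinear interpolation that GTI generalizes blends the eight corner values of a cell; GTI's contribution is making such a blend continuous across AMR level boundaries, where the eight regular corners do not all exist at one resolution. A plain single-cell version (not the paper's GTI reconstruction) looks like this:

```python
import numpy as np

def trilinear(corners, u, v, w):
    """corners: (2, 2, 2) array with corners[i, j, k] sampled at (x=i, y=j, z=k);
    (u, v, w) is the fractional position inside the cell, each in [0, 1]."""
    # Interpolate along x...
    c00 = corners[0, 0, 0] * (1 - u) + corners[1, 0, 0] * u
    c10 = corners[0, 1, 0] * (1 - u) + corners[1, 1, 0] * u
    c01 = corners[0, 0, 1] * (1 - u) + corners[1, 0, 1] * u
    c11 = corners[0, 1, 1] * (1 - u) + corners[1, 1, 1] * u
    # ...then along y...
    c0 = c00 * (1 - v) + c10 * v
    c1 = c01 * (1 - v) + c11 * v
    # ...then along z.
    return c0 * (1 - w) + c1 * w

value = trilinear(np.arange(8.0).reshape(2, 2, 2), 0.5, 0.5, 0.5)  # cell-center average
```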
A Survey on Transit Map Layout - from Design, Machine, and Human Perspectives
Transit maps are designed to present the information needed to use public transportation systems, such as urban railways. Creating a transit map is a time-consuming process that requires iterative information selection, layout design, and usability validation; thus, such maps cannot easily be customised or updated frequently. To improve this, scientists investigate fully- or semi-automatic techniques for producing high-quality transit maps with computers and further examine their usability. Nonetheless, the quality gap between manually drawn maps and machine-generated maps is still large. To clarify the current research status, this state-of-the-art report provides an overview of the transit map generation process, primarily from the Design, Machine, and Human perspectives. A systematic categorisation is introduced to describe the design pipeline, and an extensive analysis of the three perspectives is conducted to support the proposed taxonomy. We conclude this survey with a discussion of the current research status, open challenges, and future directions.
Cuttlefish: Color Mapping for Dynamic Multi-Scale Visualizations
Visualizations of hierarchical data can often be explored interactively. For example, in geographic visualization, there are continents, which can be subdivided into countries, states, counties, and cities. Similarly, in models of viruses or bacteria, the highest level contains the compartments, and below that are macromolecules, secondary structures (such as α-helices), amino acids, and, at the finest level, atoms. Distinguishing between items can be assisted through the use of color at all levels. However, there are currently no hierarchical and adaptive color mapping techniques for very large multi-scale visualizations that can be explored interactively. We present a novel multi-scale color mapping technique that adaptively adjusts the color scheme to the current view and scale. Color is treated as a resource and is smoothly redistributed: the distribution adjusts to the scale of the currently observed detail and maximizes the utilization of the color range given the current viewing requirements. Thus, we ensure that the user is able to distinguish items on any level, even if the color is not constant for a particular feature. The coloring technique is demonstrated for a political map and a mesoscale structural model of HIV. The technique has been tested by users with expertise in structural biology and was overall well received.
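A toy version of the color-as-a-resource idea (our drastic simplification, not the paper's algorithm) recursively splits a parent's hue interval among its visible children, so zooming into a node reclaims the full interval for its subtree:

```python
import colorsys

def assign_hues(node, lo=0.0, hi=1.0, colors=None):
    """node: dict with 'name' and optional 'children' (list of nodes).
    Each node is colored by the midpoint of its hue interval; the interval
    is subdivided evenly among its children."""
    if colors is None:
        colors = {}
    colors[node["name"]] = colorsys.hsv_to_rgb((lo + hi) / 2, 0.7, 0.9)
    children = node.get("children", [])
    if children:
        width = (hi - lo) / len(children)
        for i, child in enumerate(children):
            assign_hues(child, lo + i * width, lo + (i + 1) * width, colors)
    return colors

tree = {"name": "virus", "children": [
    {"name": "capsid", "children": [{"name": "protein A"}, {"name": "protein B"}]},
    {"name": "membrane"}]}
print(assign_hues(tree))
```

In an interactive setting, re-running such an assignment on only the currently visible nodes is what redistributes the hue range to the current scale, at the cost of colors not being constant per feature, as noted above.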
Tasks, Techniques, and Tools for Genomic Data Visualization
Genomic data visualization is essential for interpretation and hypothesis generation as well as a valuable aid in communicating discoveries. Visual tools bridge the gap between algorithmic approaches and the cognitive skills of investigators. Addressing this need has become crucial in genomics, as biomedical research is increasingly data-driven and many studies lack well-defined hypotheses. A key challenge in data-driven research is to discover unexpected patterns and to formulate hypotheses in an unbiased manner in vast amounts of genomic and other associated data. Over the past two decades, this has driven the development of numerous data visualization techniques and tools for visualizing genomic data. Based on a comprehensive literature survey, we propose taxonomies for data, visualization, and tasks involved in genomic data visualization. Furthermore, we provide a comprehensive review of published genomic visualization tools in the context of the proposed taxonomies.