• Y. Yang, B. Jenny, T. Dwyer, K. Marriott, H. Chen, and M. Cordeil. Maps and Globes in Virtual Reality. Computer Graphics Forum, 37(3): to appear, June 2018.

    EuroVis 2018

    Details are available at: Yalong's Visualisation Gallery.

    Abstract: This paper explores different ways to render world-wide geographic maps in virtual reality (VR). We compare: (a) a 3D exocentric globe, where the user’s viewpoint is outside the globe; (b) a flat map (rendered to a plane in VR); (c) an egocentric 3D globe, with the viewpoint inside the globe; and (d) a curved map, created by projecting the map onto a section of a sphere which curves around the user. In all four visualisations the geographic centre can be smoothly adjusted with a standard handheld VR controller, and the user, through a head-tracked headset, can physically move around the visualisation. For distance comparison, the exocentric globe is more accurate than the egocentric globe and the flat map. For area comparison, more time is required with the exocentric and egocentric globes than with the flat and curved maps. For direction estimation, the exocentric globe is more accurate and faster than the other visual presentations. Our study participants had a weak preference for the exocentric globe. Generally, the curved map had benefits over the flat map. In almost all cases the egocentric globe was found to be the least effective visualisation. Overall, our results provide support for the use of exocentric globes for geographic visualisation in mixed reality.

  • Y. Yang, T. Dwyer, S. Goodwin and K. Marriott, Many-to-Many Geographically-Embedded Flow Visualisation: An Evaluation, IEEE Transactions on Visualization and Computer Graphics, 23(1): 411–420, Jan. 2017.

    IEEE InfoVis 2016 Honorable Mention for Best Paper Award

    Demo and samples are available at: Yalong's Visualisation Gallery.

    Abstract: Showing flows of people and resources between multiple geographic locations is a challenging visualisation problem. We conducted two quantitative user studies to evaluate different visual representations for such dense many-to-many flows. In our first study we compared a bundled node-link flow map representation and OD Maps with a new visualisation we call MapTrix. Like OD Maps, MapTrix overcomes the clutter associated with a traditional flow map while providing the geographic embedding that is missing in standard OD matrix representations. We found that OD Maps and MapTrix had similar performance, while bundled node-link flow map representations did not scale well at all. Our second study compared participant performance with OD Maps and MapTrix on larger data sets. Again, performance was remarkably similar.

  • J. Wang, Y. Yang, J. Wei, and J. Zhang, Continuous ultrasound-based tongue movement video synthesis from speech, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1716–1720, 2016.

    Pre-print; Final version; Result demo (with audio); Recording demo (with audio)

    Abstract: The movement of the tongue plays an important role in pronunciation. Visualizing the movement of the tongue can improve speech intelligibility and also help in learning a second language. However, hardly any research has investigated this topic. In this paper, a framework for synthesizing continuous ultrasound tongue movement video from speech is presented. Two different mapping methods are introduced as the most important parts of the framework. The objective evaluation and subjective opinions show that the Gaussian Mixture Model (GMM) based method gives better results for synthesizing static images, while the Vector Quantization (VQ) based method produces more stable continuous video. Meanwhile, the evaluation participants stated that the results of both methods are visually understandable.

  • Y. Yang, K. Zhang, J. Wang, and Q. V. Nguyen, Cabinet Tree: an orthogonal enclosure approach to visualizing and exploring big data, Journal Of Big Data, Springer, 2(1): 1–18, Jul. 2015.

    Demo and samples are available at: Yalong's Visualisation Gallery.

    Abstract: Treemaps are well-known for visualizing hierarchical data. Most related approaches have focused on layout algorithms and paid little attention to other display properties and interactions. Furthermore, the structural information in conventional Treemaps is too implicit for viewers to perceive. This paper presents Cabinet Tree, an approach that: i) draws branches explicitly to show relational structures, ii) adapts a space-optimized layout for leaves and maximizes space utilization, iii) uses coloring and labeling strategies to clearly reveal patterns and contrast different attributes intuitively. We also apply continuous node selection and detail window techniques to support user interaction with different levels of the hierarchies. Our quantitative evaluations demonstrate that Cabinet Tree achieves good scalability for increased resolutions and big datasets.

  • Y. Yang, N. Dou, S. Zhao, Z. Yang, K. Zhang, and Q. V. Nguyen, Visualizing large hierarchies with drawer trees, In Proceedings of the 29th Annual ACM Symposium on Applied Computing (SAC '14), pp. 951–956. ACM, 2014.

    Samples are available at: Yalong's Visualisation Gallery.

    Abstract: Enclosure partitioning approaches, such as Treemaps, have proved their effectiveness in visualizing large hierarchical structures within a compact and limited display area. However, most Treemap techniques do not use node-links to show structural relations. This paper presents a new tree visualization approach, known as Drawer-Tree, that can be used to present the structure, organization, and interrelations of big data. By combining efficient use of display space with traditional node-link visualization, we have developed a novel method for visualizing tree structures with high scalability. The name "drawer" is a metaphor that helps people understand the visualization.