Highlighting linked views has been a powerful exploratory technique for many years. Early systems supported highlighting in scatter plots, where each element displays one row of data.
An extension of this technique, proportional highlighting, displays more information in graphs where elements display more than one row of data. JMP was one of the first products to support proportional highlighting, in 1989.
Figure 1. Proportional Highlighting in Histograms, JMP Version 1.0

The visualization expert Stephen Few argues strongly for the power of this technique ("It’s hard to imagine a situation when proportional highlighting isn’t significantly superior," he wrote).
For JMP Live, we wanted to support proportional highlighting consistently in all graphs because a consistent user interface is easier to learn. When users learn the technique in one graph, they can reuse their knowledge in other graphs.
We also wanted to support both coarse- and fine-grained estimation. To achieve this, we combined proportional highlighting with hover tips. Proportional highlighting enables rough estimates, while hover tips can display actual numbers.
Figure 2. Proportional Highlighting with Hover Tip. The bar displays the Mean weight of males. Eleven rows are selected and highlighted.
When choosing this design, we believed that both proportional highlighting and hover tips are easy to learn. But how can we know this? The only way to be confident is to test.
A Usability Test
JMP customers helped us test this user interface over the web. Each test displayed six pairs of graphs, in six blocks of three trials each, for a total of 18 trials. For each pair of graphs, test participants were asked to estimate the percentage of rows highlighted for specified values of a variable.
For each trial with a given pair of graphs, the visual query changed slightly: the values, the variable, or both were altered. Order was randomized for both the six blocks and the three trials within each block.
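The schedule described above can be sketched in a few lines of code. This is a hypothetical reconstruction from the description, not the actual test harness, and the pair labels are placeholders rather than the graphs that were tested:

```python
import random

# Placeholder labels for the six pairs of graphs (the real test used
# specific graph types; these names are illustrative only).
graph_pairs = ["pair A", "pair B", "pair C", "pair D", "pair E", "pair F"]

def randomized_schedule(pairs, trials_per_block=3, seed=None):
    """Six blocks of three trials: shuffle the block order, then
    shuffle the trial order within each block."""
    rng = random.Random(seed)
    blocks = list(pairs)
    rng.shuffle(blocks)                      # randomize the six blocks
    schedule = []
    for pair in blocks:
        queries = list(range(1, trials_per_block + 1))
        rng.shuffle(queries)                 # randomize trials within the block
        schedule.extend((pair, q) for q in queries)
    return schedule

schedule = randomized_schedule(graph_pairs, seed=1)
print(len(schedule))  # 18 trials in total
```

Randomizing at both levels guards against order effects: no graph type is systematically seen first, and no query variant is systematically seen last.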
Figure 3. Six Pairs of Interactive Graphs.
Twenty-five participants took the test. Several test responses showed unusually fast answer times of less than two seconds, perhaps due to errant mouse clicks. One test participant had consistently fast times, with a third of their answers made in less than one second. These data were excluded from the analysis, leaving 24 participants.
We hoped that test participants could learn our framework and improve their use of it over time. We tested this by timing their answers to the questions and scoring the correctness of their responses.
The chosen answer was required to be between 10% and 90% and an exact multiple of 10%. Because the correct answer might not be an exact multiple, scores were offset by the difference between the correct answer and the closest allowed answer. This enabled us to award a perfect score for the best possible answer.
The maximum distance from the correct answer varied with the question. If the best possible answer was 50%, the user could be off by at most 40%; but if the best possible answer was 10%, the user might be off by 80%. For this reason, scores were normalized by dividing by the maximum distance from the best possible answer.
Using this formula, zero is a perfect score, and plus or minus 1 is the worst possible score:

score = (chosen answer − best possible answer) / (maximum distance from the best possible answer)
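The scoring scheme can be sketched in code. This is a reconstruction from the description above, not JMP's actual analysis script:

```python
# Allowed answers: multiples of 10 between 10% and 90%.
CHOICES = range(10, 100, 10)

def normalized_score(chosen, correct):
    """0 is a perfect score; plus or minus 1 is the worst possible.

    The best possible answer is the allowed choice closest to the true
    percentage; the raw offset from it is scaled by the worst-case
    distance any allowed answer could be from that best answer.
    """
    assert chosen in CHOICES
    best = min(CHOICES, key=lambda c: abs(c - correct))  # best possible answer
    max_dist = max(abs(c - best) for c in CHOICES)       # worst-case distance
    return (chosen - best) / max_dist

print(normalized_score(50, 50))  # 0.0   (perfect)
print(normalized_score(10, 50))  # -1.0  (worst possible)
print(normalized_score(20, 12))  # 0.125 (best answer is 10; max distance 80)
```

Note how the scaling reproduces the examples in the text: when the best answer is 50%, the divisor is 40; when it is 10%, the divisor is 80.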
Looking at all the data, plotted with smoothing splines, we see that both answer times and scores improved across the eighteen trials. Everyone had to stare for a few seconds at the first screen, but after that, they began improving.
Figure 4. Answer Time in Seconds Across All Trials
The scores show a negative bias. This may be explained by test participants reading left to right, and tending to stop on the first good answer.
Figure 5. Score Across All Trials. Zero is a perfect score.
Both spline curves are rather "bumpy." This periodicity can be explained by the different types of graphs. On every third trial, test participants were presented with a new pair of graphs.
For the means across the three trials, ANOVA showed strong evidence of improvement in answer time (p < 0.0001). Scores improved less dramatically, but still showed a significant improvement between the first trial and the last (p < 0.05).
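As an illustration of the statistic behind such a comparison of means, the one-way ANOVA F ratio can be computed with the standard library alone. The numbers here are synthetic, not the study's data:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F = (between-group mean square) / (within-group mean square)."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)                      # number of groups (trials)
    n = sum(len(g) for g in groups)      # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic log(answer time) values for three trials of one graph pair
trial_1 = [2.2, 2.5, 2.1, 2.6]
trial_2 = [1.9, 2.0, 1.8, 2.1]
trial_3 = [1.5, 1.7, 1.4, 1.6]
print(round(one_way_anova_f(trial_1, trial_2, trial_3), 2))  # 21.33
```

A large F (relative to its F distribution under the null) means the spread between trial means is large compared to the spread within trials, which is the pattern a genuine learning effect would produce.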
Figure 6. Mean Log( Answer Time ) Across Three Trials.
Figure 7. Mean Score Across Three Trials.
Of the 24 test participants, 22 used the hover tips. Hover tips appear after a 300ms delay, as recommended in user interface style guidelines. So in a short test like this one, it is reasonable that two people did not discover them. It is likely that the hover tips would be discovered with longer exposure to the product.
Participants who did not use hover tips were limited to "eyeballing" the graphs to estimate their answers. Those who used hover tips could see the exact numbers. Even so, no strong relationship was found between participants' scores and the time they spent using hover tips; time spent depended heavily on the type of graph.
Types of Graphs
As expected, test results depended on which types of graphs were tested. Testing only six pairs of graphs is not sufficient to draw strong conclusions, but both box plots and maps were among the more difficult graphs to estimate. This is reasonable to expect, because these graphs are among the most complex.
Least squares regressions showed answer time was strongly influenced by both graph type and trial (p = 0.0004 and p < 0.0001, respectively). Score was strongly influenced by graph type (p < 0.0001) and less so by trial (p = 0.0871).
Figure 8. Regression of Log( Answer Time )
Figure 9. Regression of Score
Highlighting linked views is a powerful exploratory technique. Proportional highlighting extends this technique, enabling users to explore more of their data.
Test participants learned this user interface, easily becoming productive enough to answer questions about the graphs. Both speed and accuracy of participants' answers improved over time.
Proportional highlighting is available in both JMP Live and standalone Interactive HTML in JMP. I've posted some examples on JMP Public. We're happy to bring this technique to our customers, to enable more and faster insights into their data.
Figure 10. Proportional Highlighting of Movie Inventory Dashboard (https://public.jmp.com/packages/Proportional-Highlighting/js-p/nqjXxvt37nkqhjT5HcjGH)
Dr. Arati Mejdal (@arati_mejdal) and Ms. Stephanie Mencia conducted our test. Dr. Joseph Morgan (@joseph_morgan) and Dr. Ryan Lekivetz (@ryan_lekivetz) advised on its design. Dr. Caleb King (@calking) gave essential help analyzing the results. One great blessing of working in JMP is that when you need bright people to help you, they are easy to find.