The graph is very appealing at some level and comes with well-done animated flyovers to highlight some interesting features. Commentaries at Flowing Data and Visualising Data have been mostly positive. However, I'm always suspicious of 3-D views because of the extra step needed to translate values accurately in our minds and, for surfaces, the danger of missing information that's obscured. This graph works better as a backdrop for drill-downs into slices of interest than as a standalone data representation. While I think it adds value as context, there is also a complexity cost to consider, and it's worth exploring other views.
Getting the data was refreshingly easy in this case. The US Treasury Department provides the data in an HTML table, and the Import HTML feature in JMP brings it into a data table nicely. Though there are more than 100 HTML <table> elements in the web page, JMP correctly identifies the one that contains data (the others are likely used for page layout). The only glitch was that the date values use two-digit years. Fortunately, JMP has a preference for how to interpret two-digit years, and after setting it to treat "90" as "1990," the dates come in correctly.
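The import itself was done with JMP, but the two-digit-year pitfall is easy to demonstrate in plain Python. As a hedged sketch (the date strings below are made up, not the actual Treasury values), `strptime`'s `%y` code applies the POSIX pivot, mapping 69–99 to the 1900s and 00–68 to the 2000s, which matches the "90" → "1990" behavior set in the JMP preference:

```python
from datetime import datetime

# Dates with two-digit years, as they appear in tables like the Treasury's.
# Python's %y follows the POSIX pivot: 69-99 -> 19xx, 00-68 -> 20xx,
# so "90" parses as 1990, mirroring the JMP preference described above.
raw_dates = ["01/02/90", "06/15/05", "12/31/68"]
parsed = [datetime.strptime(d, "%m/%d/%y") for d in raw_dates]
print([dt.year for dt in parsed])  # [1990, 2005, 2068]
```

Note the last value: anything at or below the pivot lands in the 2000s, which is why a wrong pivot setting silently shifts decades of history.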
First, I'll try a 3-D surface view in JMP for comparison. Though the original looks beautiful in many ways, one feature I thought strange for a surface graph is the way the loan term lengths are treated categorically. That is, the spacing between 1-month and 3-month rates is the same as between 20-year and 30-year rates. I've seen yield curves drawn both ways, and it usually doesn't matter too much since the curve is often simplified to one of three states: rising, level or inverted. But given the context of the graph's title about “predicting the future,” it seems reasonable to look at the term length as a continuous value (that is, how far into the future we're predicting).
Here is the surface plot in JMP. I could play with the lighting and smoothing, but this lets us get a sense of the effect of a continuous representation of the term length.
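The categorical-versus-continuous distinction comes down to where each term sits on the axis. A minimal sketch (the term list is illustrative, not the exact set of maturities in the Treasury table) makes the spacing difference concrete:

```python
# Illustrative term lengths in months for a Treasury-style yield table.
terms_months = [1, 3, 6, 12, 24, 36, 60, 84, 120, 240, 360]

# Categorical placement: equal spacing, one slot per term.
categorical_x = list(range(len(terms_months)))

# Continuous placement: the position IS the term length, so the
# 20y -> 30y step (120 months) dwarfs the 1m -> 3m step (2 months).
continuous_x = terms_months

gap_short = continuous_x[1] - continuous_x[0]    # 3m - 1m
gap_long = continuous_x[-1] - continuous_x[-2]   # 30y - 20y
print(gap_short, gap_long)  # 2 120
```

On the categorical axis both of those gaps occupy one slot each; on the continuous axis the long end stretches out sixtyfold, which reshapes the surface accordingly.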
The call-outs of the original piece focus on the three possible 2-D profiles. Looking at the rate versus the term length with a separate curve for each date (yellow to red) produces an attractive view, even if not very informative.
With the coloring, we can sense the downward trend over time though we miss the dips, which are obscured. Possibly this could serve as a backdrop if a few years of interest were highlighted and labeled.
Here's the same view with only one out of every 40 days shown. At least we can get a sense of the older low rates, which were previously obscured.
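The thinning itself is simple stride-based subsampling. A sketch with a synthetic stand-in for the daily history (the real series has decades of business days):

```python
# Synthetic stand-in: one entry per daily observation.
daily_rows = list(range(1000))

# Keep every 40th day so the earlier, lower curves aren't hidden
# behind the dense stack of later ones.
thinned = daily_rows[::40]
print(len(thinned))  # 25
```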
Another way to slice the cube is to look at each term length's rate over time. This graph of two term lengths representing short-term and long-term rates over the last 25 years in 2-D gives a clearer view:
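The feature this slice makes easy to read is inversion: the dates where the short-term rate rises above the long-term rate. A hedged sketch with made-up rates (not the actual Treasury series) shows the condition directly:

```python
# Synthetic short- and long-term rates (percent) for a few dates.
short_term = [6.0, 5.5, 6.2, 1.0, 0.2]
long_term  = [7.0, 6.0, 6.0, 4.0, 3.0]

# The curve is "inverted" wherever the short rate exceeds the long rate,
# which is exactly what the paired 2-D lines let the eye pick out.
inverted = [s > l for s, l in zip(short_term, long_term)]
print(inverted)  # [False, False, True, False, False]
```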
To me, this 2-D view is clearer than the same 2-D profile within the context of the 3-D view. It's easier to see both the steady declining trend in the long-term rate and where the short-term rates were higher than the long-term rates. Another embodiment of my favorite maxim, "Less is more."
Finally, here is a reproduction of the heat map of date versus term length, using the interest rate as the color. The cut-out for the missing 30-year rates in the mid-2000s is a good application of the "alpha hull" feature added to contour plots in JMP 11.
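For the gap, the key decision is to leave missing cells blank rather than interpolate across them. A minimal sketch (rates and grid shape are synthetic) marks the missing 30-year values as NaN so a plotting layer can mask them:

```python
from math import isnan, nan

# Rate grid: rows = dates, columns = term lengths (synthetic values).
# The last column stands in for the 30-year rate, which has a gap
# where issuance stopped in the mid-2000s.
rates = [
    [5.0, 5.5, 6.0],   # early date: all terms quoted
    [4.0, 4.5, nan],   # mid-2000s: 30-year missing -> blank cell
    [3.0, 3.5, 4.2],   # later date: 30-year reinstated
]

missing = sum(isnan(v) for row in rates for v in row)
print(missing)  # 1
```

JMP's alpha-hull contouring handles this shape automatically; in a hand-rolled heat map the NaN mask is the equivalent move.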
I usually like heat maps for 3-D data, but this one doesn't seem very informative. Maybe it's the amount of variation in the rate or the irregular spacing of the term length values, but it's harder for me to get a good sense of the data from this view. I think the core issue is that the interest rate is too important to be represented by color alone, which is necessarily imprecise.
One benefit of remaking graphs like this is that you discover some of the many decisions the designers had to consider when making the published view. A few substantive decisions for this data:
Continuous versus categorical term length.
An appropriate level of smoothing, since there were too many days in the history to show every value.
Dealing with gaps in the data.
Deciding which of many interesting data features merit call-outs.
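On the smoothing decision, one of the simplest choices is a moving average. This is only a sketch of that option, not the published graphic's actual method, and the window size here is arbitrary:

```python
# Simple moving average: each output point is the mean of `window`
# consecutive input points, trading detail for legibility.
def moving_average(values, window):
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

rates = [5.0, 5.2, 5.1, 4.9, 5.0, 4.8]  # synthetic daily rates
smoothed = moving_average(rates, 3)
print(len(smoothed))  # 4: the smoothed series is window-1 points shorter
```

Larger windows flatten the dips the author notes are already easy to lose in the 3-D view, so the window size is itself one of those substantive design decisions.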
I saved my work as a JMP script (uploaded to the JMP User Community), so I could redo it easily with, for instance, new data or new smoothing parameters for experimentation. It takes a little more effort to create a reproducible script from an interactive data exploration, but I'm finding the practice to be rewarding.