Tag Archives | Design

Many Users, Many Needs

Last fall I attended the annual meeting of the Mid Atlantic Coastal Ocean Observation Regional Association.

That’s MACOORA for those of you who love acronyms. And if you do, then you’ll be excited to hear that an underlying theme of the meeting was that MACOORA and MARCOOS were merging to form MARACOOS (say that 3 times fast), but that’s a story for another day.

The meeting provided a wonderful first-hand opportunity to hear from people who use ocean data in their daily lives. The last time I attended, about 5 years ago, the bulk of the participants were research scientists, who were in the early stages of setting up what would become today’s regional ocean observing system. But times have changed, and this time there was a great diversity of end-users.

As a product developer, I really enjoyed hearing users’ insights on the kinds of data they access and, perhaps more importantly, how they would like to use various datasets in the future. We don’t often get the chance to leave the office and interact with real live users, but it’s a crucial component of our work.

Here were a few of my key take-aways:

Recreational Fishermen have been active users of our Rutgers Sea Surface Temperature datasets for almost two decades now. They really appreciate real-time access to easy-to-use imagery, especially higher resolution maps of local areas (i.e. fishing grounds). While temperature (and to a lesser extent chlorophyll) maps are their go-to images, what they really want to see are maps of convergence and divergence. Today, they infer areas of high and low divergence by looking for “fronts” in the data. Dedicated convergence maps (based on temperature, chlorophyll, or even current datasets) would point them more clearly to the areas they’re interested in.

Harbor Pilots have a very specific task to accomplish. They must steer large ships into the port of New York, avoiding bridges, channel edges, and other vessels all while battling the strong currents of the Hudson River. To do this, they need direct access to real-time information from buoys in the harbor. “If we can deal with exacts we can do a lot more and a lot safer.”

To do all of this, two key variables play a major role. Wind speed helps them direct boat operations. For example, is it too windy to move boats through narrow channels, or too rough for pilots to transfer between vessels? And water level helps them know whether they have enough clearance between the vessel’s keel and the channel bottom, or enough clearance to pass under a bridge. It can also tell them if the dock they’re headed to might be under water. (Apparently some in NYC do flood, and it doesn’t make sense to dock if the longshoremen won’t be able to offload the ship.) Having these data with an 8-12 hour forecast is ideal for planning upcoming operations like ship transfers.

The Coast Guard is chiefly interested in short term forecasts to warn pilots of upcoming conditions. Projections out to 96 hours would help them manage their assets. They are really interested in knowing more about the currents in the river.

Commercial Fishermen have needs very similar to recreational fishermen, although I gather they have a greater interest in longer-term forecasts to plan operations.

Fisheries Managers are more interested in historical datasets, in particular data products that provide climatologies and statistical models of the distribution of physical and chemical variables and fish stocks.

And what are Educators looking for? To my mind, they are happy to use any dataset or visualization that is easy to understand, and more importantly, one that makes a strong connection to answering a cool science question.

Of the lot, the last one might be the tallest order.

Painting Temperatures by Number

Today is the Solstice, and pretty soon the dog days of summer will be here. (Though, for those of us in the Northeast, we’ve already seen our fair share.) As things begin to heat up, I figured it would be appropriate to highlight some of the ways oceanographers visualize sea surface temperature data.

Temperature is a scalar variable, and its values are represented by continuous real numbers. Other variable types include categorical data, vectors, and arrays (like RGB images), but we’ll save those for another day.

Temperature data collected at a single point can easily be represented as a numerical value (e.g. 76 degrees F), or, if collected over a period of time, as a time-series graph of temperature versus time. Satellites, though, collect data over a large spatial area, so the dataset is inherently 3-dimensional: each data point consists of a longitude (X position), a latitude (Y position), and a temperature (Z) value. The visualization challenge is figuring out how to display this 3D dataset on a 2D graph.

The most common solution is to create a pseudo-color image based on the temperature data. The trick is choosing the best colors to represent temperature values. The following are just a few examples.
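
Before getting to those examples, here is what the basic recipe looks like in code: a minimal Python/matplotlib sketch in which the grid, temperature values, and units are entirely fabricated for illustration, standing in for data a real product would read from a satellite file.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated lat/lon grid roughly the size of a regional satellite scene
lon = np.linspace(-76.0, -70.0, 200)   # X: longitude (degrees)
lat = np.linspace(36.0, 42.0, 200)     # Y: latitude (degrees)
Lon, Lat = np.meshgrid(lon, lat)

# Fabricated temperature field (Z, degrees F): warmer offshore, cooler inshore
sst = 41 + 39 * (Lon - Lon.min()) / (Lon.max() - Lon.min()) + 2 * np.sin(8 * Lat)

# Map every temperature value to a color and draw the result as a 2D image
fig, ax = plt.subplots()
mesh = ax.pcolormesh(Lon, Lat, sst, shading="auto")
fig.colorbar(mesh, ax=ax, label="Sea Surface Temperature (°F)")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
plt.show()
```

The colormap argument is where all of the examples below differ; the rest of the recipe stays the same.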

The Scientist’s Rainbow

Perhaps the most common colormap, the rainbow (known as Jet in Matlab parlance) is often a scientist’s default choice. Like a Swiss Army knife, it is a good all-purpose tool. It is especially handy in helping subtle features stand out.

Sea Surface Temperature for June 2, 2011 from RU COOL

This Mid Atlantic image demonstrates how the rainbow palette allows you to easily distinguish small features within the range of a degree, even though the temperatures in the entire image range from 41 to 80 degrees Fahrenheit.
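
Reproducing that rainbow look yourself is a one-argument change in a plotting library like matplotlib. The snippet below uses the same kind of fabricated field as the earlier sketch and pins the color range to the 41 to 80 degree span mentioned above; it is an illustration, not the actual RU COOL processing.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated stand-in for a Mid Atlantic SST scene (values in °F)
Lon, Lat = np.meshgrid(np.linspace(-76, -70, 200), np.linspace(36, 42, 200))
sst = 41 + 39 * (Lon + 76) / 6 + 2 * np.sin(8 * Lat)

fig, ax = plt.subplots()
# The 'jet' rainbow spreads many distinct hues across the 41-80 °F span,
# which is what makes features on the order of a degree easy to pick out
mesh = ax.pcolormesh(Lon, Lat, sst, cmap="jet", vmin=41, vmax=80, shading="auto")
fig.colorbar(mesh, ax=ax, label="SST (°F)")
plt.show()
```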

As useful as it is, the rainbow does have some major weaknesses. It is practically useless for colorblind viewers. In addition, it is important to keep in mind that non-scientists often associate rainbow representations with temperature, and only temperature. There is something intuitive about the red=hot and blue=cold extremes. Therefore, when creating images for public audiences it is unwise to use rainbow coloring for anything other than temperature, unless you enjoy confusing users.

An Alternate Choice

Sea Surface Temperature for May 2011 from NASA Earth Observatory

Given an infinite number of available colors (or at least 16.7 million on a typical display), it’s easy to create an alternative color map. But balancing the need to highlight nuances in a dataset with accessibility for colorblind users, while also keeping an image aesthetically pleasing, is no small task.

Luckily, the artists at NASA’s Earth Observatory have come up with good alternatives over the years.
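
You don’t need a design team to experiment with your own palette, though. In matplotlib, for example, a custom colormap can be interpolated from a short list of anchor colors. The hex values below are arbitrary picks for illustration, not the Earth Observatory’s palette.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Build a smooth colormap from a handful of anchor colors
# (arbitrary picks: deep blue -> teal -> pale yellow)
sst_cmap = LinearSegmentedColormap.from_list(
    "sst_alternate", ["#08306b", "#2c7fb8", "#7fcdbb", "#ffffd9"])

# Fabricated °F field, as in the earlier sketches
Lon, Lat = np.meshgrid(np.linspace(-76, -70, 200), np.linspace(36, 42, 200))
sst = 41 + 39 * (Lon + 76) / 6 + 2 * np.sin(8 * Lat)

fig, ax = plt.subplots()
mesh = ax.pcolormesh(Lon, Lat, sst, cmap=sst_cmap, vmin=41, vmax=80, shading="auto")
fig.colorbar(mesh, ax=ax, label="SST (°F)")
plt.show()
```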

Hurricane Ready Temperatures

Hurricane-Ready Waters in the Atlantic, July 28, 2007

There is also no reason that colors need to be evenly distributed. If there is a scientific need to represent a dataset in a particular way, then adjusting the color map to highlight the message that needs to be communicated is certainly appropriate.

For instance, tropical storms develop best over waters that are above 80 degrees Fahrenheit. This image from NASA’s Earth Observatory clearly shows those portions of the Atlantic Ocean that are ripe for hurricane development. For an alternate take, check out this recent image from NOAA’s Environmental Visualization Lab.
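
One way to get an unevenly distributed palette like this is to assign colors by bins rather than by a linear scale. The sketch below (fabricated tropical field, arbitrary bin edges and colors) devotes a single muted color to everything below 80 degrees Fahrenheit and reserves the rest of the palette for the hurricane-ready range.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm, ListedColormap

# Uneven bins: one coarse bin below 80 °F, then fine 2-degree bins above it
bounds = [50, 80, 82, 84, 86, 88]
colors = ["#c6dbef", "#fee391", "#fe9929", "#d95f0e", "#993404"]
cmap = ListedColormap(colors)
norm = BoundaryNorm(bounds, cmap.N)

# Fabricated tropical Atlantic field (°F), warmest in the low latitudes
Lon, Lat = np.meshgrid(np.linspace(-80, -20, 300), np.linspace(5, 35, 150))
sst = 70 + 18 * np.exp(-((Lat - 15) / 10) ** 2)

fig, ax = plt.subplots()
mesh = ax.pcolormesh(Lon, Lat, sst, cmap=cmap, norm=norm, shading="auto")
fig.colorbar(mesh, ax=ax, ticks=bounds, label="SST (°F)")
plt.show()
```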

Temperature Anomalies

Sea Surface Temperature Anomaly for May 2011 from NASA Earth Observatory

Finally, sometimes it’s not the temperature value you want to represent, but the difference between the current temperature and a historical average. This is referred to as an anomaly, and because it is a diverging dataset it is often graphed slightly differently. Anomalies are typically represented as positive and negative differences, colored red and blue respectively, with differences close to zero left white.

This SST Anomaly image is from May 2011. These images are used to track El Niño and La Niña patterns, among other processes.
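
In plotting terms, this usually means a diverging colormap with its midpoint pinned to zero. Here is a sketch using matplotlib’s red-blue map and a fabricated anomaly field; the real NASA products are built differently, but the idea is the same.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

# Fabricated anomaly field (°F): current SST minus a historical average
Lon, Lat = np.meshgrid(np.linspace(-180, 180, 360), np.linspace(-60, 60, 120))
anom = 3 * np.sin(np.radians(Lon)) * np.cos(np.radians(2 * Lat))

fig, ax = plt.subplots()
# Diverging red/blue colormap with white pinned at zero anomaly
norm = TwoSlopeNorm(vmin=-4, vcenter=0, vmax=4)
mesh = ax.pcolormesh(Lon, Lat, anom, cmap="RdBu_r", norm=norm, shading="auto")
fig.colorbar(mesh, ax=ax, label="SST anomaly (°F)")
plt.show()
```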

Well, that’s a quick roundup of some common examples. Do you have your own favorite way of coloring spatial maps of temperature or other datasets?

For more, check out the excellent primer Use of Color in Data Visualization by Robert Simmon.

What’s out there


Real-time data use in classrooms and other educational settings has gained a lot of attention in recent years. But the challenges of using and incorporating data effectively are still immense. Most of us are still in the prototype phase, trying to build new interfaces and curricula to find out what works. As a result, we spend a lot of time conducting needs assessments on what we know, what’s already out there, and what we need to fulfill our goals and the needs of our audience.

A fellow observatory educator recently asked me what studies on using real-time data in ocean education I was aware of. So here’s a place to start.

This isn’t a long list and I’m sure there are more studies or needs assessment reports out there. Please comment below if you know of any.

But I think it’s clear that we all still have a lot more to learn.
