Archive for the processing Category

A map, retro style

Posted in history, map, moon, processing, usa on November 2, 2010 by moonmapper

This is National Geographic's map of the Moon, first published in the February 1969 issue:

(Reduced under fair use. You can see a zoomable version and buy a copy at the NatGeo store.)

I have found a fascinating story about the creation of this map, told by the cartographer who actually worked on it: Part 1 and Part 2. The work started back in 1964, before detailed imagery of the Far Side was even available (but after the first images of the Far Side were returned by Luna 3). As the Lunar Orbiter images came in, a special process had to be developed and employed to rectify the photographs and place them on a coordinate grid. It worked, although there was one major problem on the Far Side:

The gut-killer was that there was nothing I could use to check my work. I had to work across the entire Far Side hoping everything would meet up correctly. Fortunately it did.

I humbly bow before people who were able to accomplish such things.

Contrast this with today, when I can get a gridded and calibrated altimetry dataset from the spacecraft over the Internet in a few minutes, spend one weekend writing processing software, and produce a map of an interesting area within minutes, all without even leaving my home. Or having formal training in cartography, for that matter. Or, if I'm lazy, fire up VMA or LTVT and be done even quicker. I can even shade the map with titanium concentration in a few clicks, if I feel like it. To quote Phil Plait:

We live in the future. Still no flying cars, but we live in the future.


A map in five easy steps

Posted in map, moon, processing on October 3, 2010 by moonmapper

Introduction

Astronomy is one of the few fields of scientific endeavor where amateurs can make valuable contributions. Published astronomy data are a real treasure trove, and thanks to the wonders of computers and the Internet, you can go looking for treasures yourself. You can find a new class of astronomical object, or a lost Russian rover, or an alien base… oh, wait.

This post describes my current workflow for producing lunar maps from the Kaguya LALT (Laser ALTimeter) dataset. I use GNU Octave, which is a free clone of MATLAB, but I link to the original MATLAB documentation simply because it is better. The referenced functions work the same in both; the only difference is that MATLAB produces prettier plots.

Step 1. Obtaining the dataset.

For making height maps, we need an altimetry dataset. (Similarly, to draw a map of thorium concentrations, we would need a dataset containing Th levels.) Altimetry datasets are also referred to as DEMs (Digital Elevation Models). Basically, there are two kinds of datasets: time series data and gridded data. Time series data contain measured values (in our case, altitude) versus time. To obtain useful information (where exactly on the Moon a particular measurement was made), these have to be combined with data about the spacecraft's location versus time. Gridded data are derived from time series data, and contain measurement results versus location. (Since a particular area can be measured multiple times, e.g. once per orbit, while other areas can be skipped entirely, gridded datasets are usually averaged and interpolated appropriately.) Here we will work with gridded data.

NASA datasets can usually be obtained through the Planetary Data System (PDS). For information on how to obtain Kaguya data, see this page.

Step 2. Loading the dataset.

An altimetry dataset can be represented either as an image (.IMG file) or as a table (.TAB file). A table contains (longitude, latitude, altitude) triplets, such as these from the Kaguya South Polar dataset:

12.453125 -80.00390625  1.907
12.484375 -80.00390625  1.895
12.515625 -80.00390625  1.894
12.546875 -80.00390625  1.894
12.578125 -80.00390625  1.907

(I currently work with the tabular data, because it can be loaded into Octave directly once the header is stripped.)
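For illustration, here is a minimal loading sketch in Octave. The file name is a placeholder, and I assume the PDS header has already been stripped so only the numeric triplets remain:

% Load (longitude, latitude, altitude) triplets from a header-stripped .TAB file.
% The file name below is just a placeholder.
data = load('LALT_SOUTH_POLE.TAB');   % N-by-3 matrix

lon = data(:,1);   % longitude [degrees, positive east]
lat = data(:,2);   % latitude  [degrees, positive north]
z   = data(:,3);   % altitude  [km] relative to the reference sphere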

In cases where data are represented as an image, altitude is encoded as a pixel value (color/brightness), while latitude/longitude information is encoded in the pixel coordinates. In other words, such an image, when opened in image processing software, will show an altitude map. Unfortunately, there is no single convention (map projection, pixel values) for how such image datasets are encoded. Fortunately, the people at UMSF have most of these worked out already. In principle, the image can be converted into a table like the one above and used in the following steps.

Altitude values are expressed with respect to a reference ellipsoid. In the case of the Moon, this is a sphere with a radius of 1737.4 km. The latitude and longitude coordinates follow the terrestrial convention: in degrees, positive to the north and east. For more information, see A Standardized Lunar Coordinate System for the Lunar Reconnaissance Orbiter.

Step 3. Projecting.

As most of us know, planetary bodies are spherical, while maps are flat. To render a 3D planetary surface as a 2D map, we must perform a projection. As stated above, our dataset is expressed in terms of latitude (φ) and longitude (λ), i.e. (φ, λ, z) triplets, where z is the altitude. First, we select a center point for the map, (φ0, λ0). Then we calculate the (x, y) coordinates on the projection (map) plane for each (φ, λ) (latitude, longitude) pair in the dataset, producing (x, y, z) triplets. The particular equations for calculating the (x, y) values from the (φ, λ) pairs depend on the chosen projection. For details, as well as a comprehensive discussion of the advantages and disadvantages of different projections, see Snyder, J.P., Map Projections: A Working Manual. Or get an accelerated course in orthographic projection at Wikipedia.
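As an illustration, here is what the orthographic projection (the one covered by the Wikipedia article) looks like in Octave. The function name and argument order are my own choices (save it as ortho_project.m); the equations themselves are the standard ones from Snyder:

% Orthographic projection of (lat, lon) [degrees] onto a plane tangent
% to the sphere at the center point (lat0, lon0). R is the body radius;
% for the Moon, R = 1737.4 km.
function [x, y] = ortho_project(lat, lon, lat0, lon0, R)
  phi  = lat  * pi/180;                 % degrees -> radians
  phi0 = lat0 * pi/180;
  dlam = (lon - lon0) * pi/180;
  x = R .* cos(phi) .* sin(dlam);
  y = R .* (cos(phi0) .* sin(phi) - sin(phi0) .* cos(phi) .* cos(dlam));
end

Note that the orthographic projection only makes sense for points on the visible hemisphere; points more than 90° away from the center point should be discarded before plotting.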

Step 4. Resampling.

We now have the (x,y,z) coordinates of the terrain surface. Here comes the tricky part, as we encounter two problems.

The first problem: we have too much data. For example, if we are mapping a 100 km × 100 km area, and the dataset is sampled at 10 m resolution, we have 10,000 × 10,000 = 100,000,000 points. However, when drawing a map of a 100 km × 100 km area, we may find that 500 m resolution is more than satisfactory (that works out to 200 points along each axis of the map, which is generally enough to get a good understanding of the shape of the terrain). We solve this by randomly extracting 40,000 points from the dataset and throwing out the rest. (Yes, there are better methods, but this one is the fastest and works surprisingly well.)
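In Octave, the random extraction can be as simple as the sketch below; xs, ys, zs are my names for the subsampled vectors, and x, y, z are the projected coordinates from the previous step:

% Randomly keep 40,000 points and throw out the rest.
n   = min(40000, numel(x));
idx = randperm(numel(x));    % random permutation of all indices
idx = idx(1:n);              % keep the first n of them
xs = x(idx);  ys = y(idx);  zs = z(idx);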

The second problem: plotting tools usually require z(x,y) values, with the (x,y) arguments describing points of a rectangular grid. (Such a grid is constructed using the meshgrid function.) However, our (x,y,z) triplets do not form a rectangular (x,y) grid; they form a latitude/longitude grid which, in general, is no longer rectangular once projected onto a plane. If you look at a (terrestrial) map, the lat/long lines are usually not straight but curved; since our z values are expressed in terms of latitude and longitude, this poses a problem. (We actually made the problem worse during the previous step, by sampling the data randomly.) We solve this with the griddata function. This function takes our projected (x,y,z) data and the grid coordinates (xi,yi) returned by meshgrid, and returns (xi,yi,zi) triplets (matrices, actually; see the meshgrid documentation if you really want to know) containing interpolated altitude values (zi) at the map grid points (xi,yi).
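Put together, the regridding step looks roughly like this; the 200 × 200 grid size is just an example, matching the 500 m resolution mentioned above:

% Build a regular 200x200 map grid spanning the projected data...
xv = linspace(min(xs), max(xs), 200);
yv = linspace(min(ys), max(ys), 200);
[xi, yi] = meshgrid(xv, yv);
% ...and interpolate the scattered (xs, ys, zs) points onto it.
zi = griddata(xs, ys, zs, xi, yi);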

Step 5. Plotting.

Finis coronat opus. We pass (xi, yi, zi) to the plotting functions. Isolines can be plotted using the contour function, or contourf if you want coloring. Alternatively, we can plot a 3D view using surf. Add labels for named terrain features and descriptions, and you have a map.
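A minimal plotting sketch, using the gridded values from the previous step (the axis labels and styling choices are mine):

% Isoline map:
contour(xi, yi, zi);
axis equal;                        % preserve the map's aspect ratio
xlabel('x [km]');  ylabel('y [km]');

% Alternatively, a 3D view:
% surf(xi, yi, zi); shading interp;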

Caveat emptor

As you can see, drawing a map yourself is hardly rocket science. But we must remember that our maps will contain errors. There are several sources of error in the plotted data that we must be aware of:

  1. Measurement errors (artifacts)
  2. Measurement resolution (horizontal and vertical)
  3. Missing data and interpolations used during production of the gridded dataset
  4. Projection errors (deformations)
  5. Errors introduced during downsampling the dataset
  6. Errors introduced during final re-griding and interpolation

The bottom line is that with this method, one can make a pretty good map relatively quickly; however, making an authoritative map would require more advanced processing and strict error analysis. So if you want to use these maps to familiarize yourself with lunar features, they are fine. If you want to use them to plan lunar observations, they should be fine as well. Same if you want to write a realistic sci-fi story. But if you plan to use them for guiding a spacecraft, that would not be a good idea.