Contouring 5m ESRI GRID data

Recently a 6 km² block of ESRI ASCII Grid 5m demo data was offered as a benchmark for testing the performance of TheoContour, the ‘no frills’ contouring tool for AutoCAD. I’m not sure of the origin of the data, but it looks a lot like Lidar, and this set certainly gives a good indication of how TheoContour could handle a 5m post-spaced Lidar swath if needed.

The point data is lovely, but it doesn’t read like a map, and even as a surfaced model it’s not really very useful for mapping, so getting contours out of the points is a good first step towards getting a map out of it. TheoContour is a great way of making that first step from data to map (getting from points to lines is the first step in most surveying processes these days!), but the sheer density of the data means quite a bit of care is needed when contouring such a big swath.

The ESRI grid data loaded as AutoCAD points

TheoContour has some slightly tricky settings, and the regular nature of the Lidar data gave me a good opportunity to get some comparative results with the sampling and sub-sampling controls.

The TheoContour contouring controls at 17/17

AutoCAD addressable memory limit: First of all, it’s worth noting that processing a surface containing all 237,765 points is a bit like asking TheoContour to hold up the whole sky! It will hit the addressable memory limit in AutoCAD at some point in the calculation process. More than likely this will be at the contour processing end, as this is when there is no escape from asking AutoCAD to do a lot of work in plotting the nodes on the contour lines and joining them up. In most cases TheoContour does pretty well at collating the point data, so when getting to grips with a big job like this it’s the contour outputs that test the memory handling in AutoCAD.

First off, TheoContour needs to collate the points. This took about 75s, and the command line ‘thermometer’ gives you a clue as to what’s going on; running the collate command gives a command line report on completion:

Select points to include in the contour model.
Select objects: ALL
237765 found
Select objects:
Selecting and checking points.
Building Surface.
Checking Surface
Min point 269639.5000, 737484.5000, 105.0000
Max point 272659.5000, 739444.5000, 609.4700
Processed 237769 points,
Boundary Contains 4 points,
Created 475532 surfaces
Average Surface Size: 9.63
Min Surface Size: 5.69
Max Surface Size: 2040.63

This gives us a clue as to how to handle the contours. So the next step is to contour? Not quite; with a big processing order like this there will be a limit to what can be done, so the choice of contour interval, the smoothing step and the processing strategy become VERY important at this point! Ignore the need for a suitable interval at your peril:

Plot and join 25 million interpolated nodes, eh?

Choosing data density over processing capacity: It helps to think of contour generation as ‘node building’. The height range is 504.47m from highest to lowest point; if we contour at a 1m interval we are asking for around 500 contour lines, each one ‘joining up’ a rough maximum of 5 points per metre along its length and then interpolating nodes at the changes of direction of the isoline. So an estimate of the required nodes per contour line might be based on the longest line (say the length of the perimeter in the worst case) of 10,000m x 5 = 50,000 nodes per polyline x 500 polylines = 25 million nodes. For AutoCAD to plot and join 25 million nodes may be possible, but my system is best described as ‘average’ (i.e. the minimum I can afford for an AutoCAD 2011 platform), so I think working on the basis of a 1m contour interval for the whole block at once is not viable.

A 5m interval would generate around 100 contours, an 80% reduction in processing compared with 1m. Working with a 5m interval is still going to be slow, so to test the sampling settings I used a 10m interval. Straight polylines (splines are very nice, but AutoCAD will be pushed hard to generate the smoothing!) with no duplicate point checking further reduce the processing overheads. Because of the gridded nature of the point data there is a bias towards a ‘gridded’ or ‘stepped’ contour model:
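The back-of-envelope node budget above can be sketched as a quick calculation. The 10,000m worst-case line length and 5 points per metre are the article’s working assumptions, not measured figures:

```python
# Rough node-budget estimate for contouring, using the figures from the text.
# The worst-case line length and node density are the article's assumptions.

def node_budget(height_range_m, interval_m, worst_line_m=10_000, nodes_per_m=5):
    """Estimate the total interpolated nodes for a given contour interval."""
    n_lines = height_range_m / interval_m          # number of contour lines
    nodes_per_line = worst_line_m * nodes_per_m    # worst-case nodes per polyline
    return n_lines * nodes_per_line

# Height range from the collate report: 609.47 - 105.00 = 504.47m
height_range = 609.47 - 105.0

for interval in (1, 2, 5, 10):
    total = node_budget(height_range, interval)
    print(f"{interval:>2}m interval: ~{total / 1e6:.1f} million nodes")
```

A 1m interval comes out at roughly 25 million nodes, matching the estimate in the text; a 10m interval cuts that to about 2.5 million, which is why the coarser interval was used for testing the sampling settings.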

10m interval sampling controls at 10/2

10m interval sampling settings at 10/8

By increasing the sub-sampling rate step by step, the ‘stepping’ effect of the contours begins to be reduced. Each increase in the sub-sampling rate increases the processing time, so there is a point where the sample rate needs to be reduced once a good sub-sampling result is achieved: the goal is to commit to the minimum amount of processing that gets the smoothest line. The sub-sampling has the biggest effect on the stepping, but also the biggest effect on the processing effort.

At 10/17 the contours have lost the stepping and, even as straight polylines, look like the terrain they depict.

5m interval 50m index 10/17

With the sampling settings taking care of the smoothing, the straight polylines work well. The nature of contours is such that the nesting of curves, the pinching of incised features and the moiré effect of lines on convex and concave slopes lead the eye to read the surface as a tactile object. TheoContour generates the lines as polylines, so editing them is not too difficult. Introducing an appropriate lineweight for the index contour and some sensible colours by layer gets the model behaving cartographically:

5m interval index at 50m

With patience a 2m interval is possible:

2m interval 10m index 10/17

Of course the contours are true 3D entities:

As you would expect the polylines sit at their correct Zs

2m interval 10m index 10/17

Download TheoContour for Bricscad here.

Download TheoContour (as part of the TheoLt Suite) for AutoCAD here.

Photography for PhoToPlan3D: the 3×3 rules

The following text is adapted from a paper presented by Peter Waldhäusl (University of Technology, Vienna, Austria) and Cliff Ogleby (Dept. of Geomatics, University of Melbourne, Australia), at the ISPRS Commission V Symposium “Close Range Techniques and Machine Vision” in Melbourne, Australia, 1994. Simple rules that are to be observed for photography with non-metric cameras have been written, tested and published at the CIPA Symposium in Sofia in 1988.

• Measure some long distances between well-defined points.
• Define a minimum of one vertical distance (either using plumb line or vertical features on the building) and one horizontal.
• Do this on all sides of the building for control.
• Ideally, establish a network of 3D co-ordinated targets or points.

• Take a ‘ring’ of pictures around the subject with an overlap of greater than 50%.
• Take shots from a height about half way up the subject, if possible.
• Include the context or setting: ground line, skyline etc.
• At each corner of the subject take a photo covering the two adjacent sides.
• Include the roof, if possible.
• No image should lack overlap.
• Add orthogonal, full façade shots for an overview and rectification.

Stereo-pairs should be taken:
• Normal case (base-distance-ratio 1:4 to 1: 15), and/or
• Convergent case (base-distance-ratio 1:10 to 1: 15).
• Avoid the divergent case.
• Add close-up, square-on stereo-pairs for detail, and measure control distances for them or place a scale bar in the view. Check that the photography overlaps by at least 60%.
• If in doubt, add more shots and measured distances for any potentially obscured areas.
• Make sure enough control (at least 4 points) is visible in the stereo image area and at least 9 control points in the single image area.
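The base-to-distance bands quoted above can be checked with a small sketch. The thresholds are taken from the rules as given; the function name and example figures are illustrative, not part of any real toolset:

```python
# Check a stereo base against the base:distance ratio bands given in the
# 3x3 rules: 1:4 to 1:15 for the normal case, 1:10 to 1:15 for convergent.

def base_ratio_ok(base_m, distance_m, case="normal"):
    """Return True if base:distance falls inside the recommended band."""
    ratio = base_m / distance_m            # e.g. 1:4 -> 0.25
    bands = {
        "normal":     (1 / 15, 1 / 4),     # 1:4 to 1:15
        "convergent": (1 / 15, 1 / 10),    # 1:10 to 1:15
    }
    lo, hi = bands[case]
    return lo <= ratio <= hi

# A 1.5m base at a 10m stand-off gives about 1:6.7 -- fine for the normal case:
print(base_ratio_ok(1.5, 10.0, "normal"))       # True
# ...but too wide for the convergent case, which needs 1:10 or narrower:
print(base_ratio_ok(1.5, 10.0, "convergent"))   # False
```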

• Fixed optics if possible. No zooming! Fully zoom out, or fix the focus using adhesive tape, or avoid zoom optics altogether. Do not use shift optics. Disable auto-focus.
• Fixed focus distance. Fix at infinity, or a mean distance using adhesive tape, but only use one distance for the ‘ring’-photography and one distance for close-ups.
• The image format frame of the camera must be sharply visible on the images and have good contrast.
• The true documents are the original negatives or digital ‘RAW’ equivalents. Use a camera with the highest quality format setting.

Use the best quality, highest resolution and largest format camera available:
• A wide-angle lens is better than narrow angle for all round photography. Very wide-angle lenses should be avoided.
• Medium format is better than small format.
• Calibrated cameras are better than uncalibrated ones.
• Standard calibration information is needed for each camera/lens combination and each focus setting used.
• A standardised colour chart should be used.

Consistent exposure and coverage is required.
• Work with consistent illumination: beware deep dark shadows!
• Plan for the best time of day.
• Use a tripod and cable release/remote control to get sharp images.
• Optimise shutter speed and aperture by using a ‘slow’ ISO setting.
• Use RAW or ‘high quality’ or ‘fine’/’super fine’ setting on digital cameras.
• Test and check the exposure using the histogram to understand the balance needed.

Make proper witnessing diagrams of:
• The ground plan with the direction of north indicated
• The elevations of each façade (1:100 – 1: 500 scale). Show the location of the measured control points.
• Photo locations and directions (with frame number).
• Single photo coverage and stereo coverage.
• Control point locations, distances and plumb-lines.

Include the following:
• Site name, location and geo-reference, owner’s name and address.
• Date, weather and personnel. Client, commissioning body, artists, architects, permissions, obligations, etc.
• Cameras, optics, focus and distance settings.
• Calibration report, if available.
• Description of place, site, history, bibliography etc.
• Remember to document the process as you go.

Data must be complete, stable, safe and accessible:
• Check completeness and correctness before leaving the site.
• Save images to a reliable site off the camera.
• Save RAW formats to convert into standard TIFFs. Remember a CD is not forever!
• Write down everything immediately.
• Don’t crop any of the images – use the full format.
• Ensure the original and copies of the control data, site diagrams and images are kept together at separate sites.

Although these rules were devised for ‘classical’ (stereo) photogrammetric recording, they hold true for photocover in general. The rules have been modified to suit digital camera work but do not incorporate the use of Exif data in processing the images.

PhoToPlan3D: Understanding precision

PhoToPlan3D = simple photogrammetric plotting in AutoCAD!

PhoToPlan3D by kubit makes two useful aspects of photogrammetry available to AutoCAD users: first, plotting from 2 (or more) images by intersection in 3D, and second, plotting by projection from single images onto a plane. When used together, the PhoToPlan3D toolset enables the use of non-metric imagery as a data source in AutoCAD with surprisingly good results.

There are 3 main aspects to getting the best out of PhoToPlan3D: control, image condition and image orientation.

The first thing to get a grip on is the relationship between the image and the control required to orient it. Let’s look at the control first.

The distribution of control points should be as wide as possible and should not rely on clusters in the centre of the area to be mapped. The number of control points is on the high side if you are used to the minimum of 4 in PhoToPlan2D; the minimum of 9 points can seem a bit tiresome, but the benefits of the higher density of control will be worth it! If you imagine the task as filling a ‘data hole’, a good distribution of points would be not only around the edges of the hole but in the hole itself too! There are 3 basic rules on control points:

1. The more the merrier, 9 points per image is the minimum!

2. A wide distribution is better than a narrow one, and

3. The closer to the data area the better

The condition of the imagery has a big effect on the precision achievable; this can be characterized in 4 elements:

1. The disposition of the images. If the images are captured from camera positions too close together or too far apart, the intersection of rays between them and the object space will be either too obtuse or too acute; it helps to think of a 90 to 60 degree intersection as ideal. If the stand-off from the camera to the subject varies too much, the results will be poor. PhoToPlan3D will give you results from poorly conditioned imagery, but the precision will be proportionally poor too! Big X,Y rotations between images are not a problem, although it’s helpful to resolve this at insert.

2. The convergence of the images. You will get nowhere if the images do not cover the subject area! Also, if they point at the subject too obliquely you will find it hard to find common points between the images when plotting.

3. The consistency of the images. Ideally image pairs should be from the same camera and lens. It’s difficult (but not impossible) to achieve good intersections when one image is taken with a wide angle lens and the next with a telephoto.

4. Camera calibration information. If camera calibration data is available it should be used; PhoToPlan3D works with an ‘inverse’ calibration based on an assumed focal length, and if the true focal length can be used the precision is better. Relying on the image EXIF data is not enough.

Image condition: this is a poor case, as there is not only a big ‘Z’ shift but also severe tilt between the 2 images.


The quality of the image orientation depends on a combination of the image condition and the control quality. The image orientation panel is very flexible and it can be used to experiment with control point configurations point by point to see what gives the best result.

The image orientation panel. You can add more control points, take them in and out of the calculation and see the worst case point at any time after adding the 1st 9 points.

It’s worth noting that the constellation of control points used will change as new points are added; the precision (as reported by the standard deviation) may well degrade beyond a certain number of points, and the reported worst case point will change: experimentation here pays off. The effect of adding new points can be to dilute the precision!

Tip: if you are using imperial units in AutoCAD, the ‘Deviation’ results will not be easy to read, as the scores will be fractions of an inch. Set the AutoCAD units to decimal with at least 4 decimal places to see the scores!

Refining the image orientation: The initial 9 points for the first image may be the only points you have, but if you are working with a point cloud or existing 3D survey data it’s well worth adding more points and testing which configuration works best. It’s simply a case of holding off hitting the ‘Create orientated image’ button until the standard deviation is as low as possible.
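The refinement strategy described here (and used later on the 18-point pair) can be sketched as a greedy loop: add a candidate point, drop the worst-fitting point if that helps, and keep each change only if the reported SD improves. The `sd_of` function below is a toy stand-in for the solver’s reported deviation, driven by made-up per-point residuals; it is not a PhoToPlan3D API:

```python
# Sketch of the greedy orientation-refinement strategy, under the assumption
# that the solver reports an SD for any set of active control points.
# Here sd_of() is a toy stand-in: the RMS of invented per-point residuals.

residuals = {
    "P1": 0.004, "P2": 0.005, "P3": 0.006, "P4": 0.004, "P5": 0.005,
    "P6": 0.006, "P7": 0.004, "P8": 0.005, "P9": 0.030,   # P9 fits badly
    "P10": 0.004, "P11": 0.005,                            # extra candidates
}

def sd_of(points):
    """Toy stand-in for the solver's reported SD: RMS of the residuals."""
    vals = [residuals[p] for p in points]
    return (sum(v * v for v in vals) / len(vals)) ** 0.5

def refine(active, candidates, sd_of, min_points=9):
    """Greedy refinement: accept swaps/additions only if the SD improves."""
    best = sd_of(active)
    for cand in candidates:
        trial = active + [cand]
        if len(trial) > min_points:
            # find the worst-fitting point: the one whose removal lowers
            # the SD the most, without dropping below the 9-point minimum
            drop = min(trial, key=lambda p: sd_of([q for q in trial if q != p]))
            without = [q for q in trial if q != drop]
            if sd_of(without) < sd_of(trial):
                trial = without
        if sd_of(trial) < best:            # keep the change only if it helps
            active, best = trial, sd_of(trial)
    return active, best

initial = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9"]
final, best = refine(initial, ["P10", "P11"], sd_of)
print(f"initial SD {sd_of(initial) * 1000:.1f}mm -> refined SD {best * 1000:.1f}mm")
```

In this toy run the badly fitting P9 is swapped out for a candidate and the reported SD drops accordingly; in PhoToPlan3D the equivalent moves are made by hand in the orientation panel, activating and deactivating points and watching the deviation scores.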

Let’s look at the orientation procedure in detail:

The image pair and the control: in this case the control is from an existing survey. A number of control points have been marked ready for image orientation. (Data supplied by English Heritage, used with permission.)

With the 2 images loaded on a convenient UCS and the control points marked, the next step is to open the Image Orientation panel and begin to add control points to the orientation; this is done by point matching between the image and the control.

Tip: the deviation scores are not displayed until the destination image name has been set.

Adding control points to the orientation.

Note on precision: The standard deviation (SD) refers to the performance of the fit between the image and the control geometry. If the SD of two image orientations is 1cm each, it can be expected that the resultant precision when plotting between them will be roughly double the SD, i.e. 2cm per point.
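The rule of thumb above, that the two orientation SDs roughly add when plotting by intersection, is trivial to sketch. This is the article’s approximation, not formal error propagation:

```python
# The article's rule of thumb: plotting between two oriented images gives
# roughly the sum of their orientation SDs. An approximation, not a formal
# error-propagation model.

def expected_plot_precision(sd_image1_m, sd_image2_m):
    """Rough per-point precision when plotting between two oriented images."""
    return sd_image1_m + sd_image2_m

# Two orientations at 1cm each -> roughly 2cm per plotted point:
print(expected_plot_precision(0.01, 0.01))  # 0.02
```

The 10mm and 9mm orientations described below would, on this reckoning, be expected to plot at around 2cm per point, which is consistent with the 2-3cm displacement observed for the poorer pair.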

The orientation is completed with the Create orientated image function and is presented in CamNav view. This example is a good one for looking at the registration across the image area. The control points are mostly in the centre and left side of the image, and the fit is poor around the edges (particularly at the apex of the pediment on the right). The distribution of control points could be better!

Reducing the plan rotation in the pair improves precision. By increasing the density of control points and choosing images with less tilt between them the precision is improved, but the quality of the pair in terms of parallelism with the subject seems to have the biggest impact on the precision of digitised lines. Here I have loaded 2 images with more vertical displacement but less convergence. Despite the deep Z shift between them, these two images have better parallelism. The orientation SD was refined by selecting 18 points, deactivating the worst fit point and then adding new points (this worked better than juggling the relative fit of the first 18); the SDs came in at 10 and 9mm. Here are 2 pairs with different characteristics:

The relative plan rotation of the images seems to be more important than the tilt or depth displacement. Surprisingly, in the 2 pairs above I got far better results with pair 1 than pair 2; pair 2 is a significantly convergent case, whereas pair 1, despite the big Z shift and vertical tilt, has better parallelism.

By repeating the image orientation and testing lines (in blue in the screenshot) against the control point positions I can gain confidence in the pair. Pair 2, despite comfortable SD scores (11 and 12mm), plots consistently displaced by 2-3cm.

The difference in the performance of the 2 pairs can be explained by the planar orientation; the best pair was taken in parallel and worked well even though it has a big stand-off variation.

It’s a good idea to experiment to find the best image condition. One of the great strengths of PhoToPlan3D is the flexibility you have to try things out and see what the results are like before you commit to plotting. With the orientation refinement method practised, and an abundance of control and imagery available, the optimum image condition can be found. The classic 3×3 rules (contained in this pdf) still hold for photogrammetric capture, but it seems that PhoToPlan3D works best with near ‘square on’ imagery!