1st effort. The surface generated from the Sept 2014 block produced a very irregular model. I had hoped contouring would improve the interpretation of the earthwork, but it did not: the contour model was horrible, and even after taking out the hedge-lines the 1640s work is barely visible.
I used down-sampled imagery for this model to feed my impatience, but I knew from the outset I’d be lucky to map the surface reliably. The weeds were knee-high across almost the entire site; it was more DVM (Digital Vegetation Model) than DSM (Digital Surface Model). The contour model was something of a mess:
Wind stagger and oblique cover. The Feb 2015 ‘block’ (I think ‘spatter’ might be a better term) had a different problem: the wind was pretty hefty, which meant the cover suffered from obliqueness. Despite using a low-angled flight, an even distribution was not achieved. The wind was from right to left in this view, and it almost blew the camera clean off the site:
The pixel-stretched ‘leaning’ trees on the right of the image reveal the impact of the obliqueness, as the projection of the image texture is stretched over the surface. The surface, however, is close to the ground surface (certainly close enough for 1:200), so I decided to commit the 28 hours of processing time the full-resolution images demanded for a high-density mesh (they were converted to JPEG from the ARW format for ingestion into Stereoscan, so some compression occurred).
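As an aside, that raw-to-image conversion step can be scripted. The sketch below is purely illustrative (the folder name and settings are my assumptions, not the workflow actually used here): it batch-converts Sony ARW raws to 16-bit TIFF with rawpy and imageio, which would dodge the JPEG compression losses mentioned above at the cost of much larger files.

```python
import rawpy
import imageio.v3 as iio
from pathlib import Path

# Illustrative batch conversion of Sony ARW raws to 16-bit TIFF (lossless);
# the folder name and postprocess settings are assumptions, not the actual workflow.
for arw in sorted(Path('flight_feb2015').glob('*.ARW')):
    with rawpy.imread(str(arw)) as raw:
        rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)
    iio.imwrite(arw.with_suffix('.tif'), rgb)  # JPEG output would need 8-bit (output_bps=8)
```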
The ‘first pass’ ortho is encouraging: based on a low-density mesh, the textured model looks good. A patient wait for a dense point cloud, and even a low-polygon mesh built from it, produces sharp geometry:

Comparison between the two surfaces is revealing:
On the left is the surface derived from 22 uncompressed images processed to a dense point cloud. On the right is the same operation applied to 96 down-sampled images. An object lesson in the ‘crap in, crap out’ effect: processing more images makes no odds if they are down-sampled ones! The surface mesh from the uncompressed imagery reveals the ditch and bank features well, much better than the earlier, vegetation-masked version. The quality of the imagery is crucial: lighting, vegetation level, even the time elapsed between the first and last captured frame all have a bearing on the outcome in terms of shadow and sharpness.
The camera was set up at ISO 1250 in AV with the aperture set at f/5.6, which gave shutter speeds of between 1/2000th and 1/3200th. These are numbers that are only feasible with a full-frame sensor and good glass. The high ISO and shutter speeds are the great advantages of the full-frame format: the shots were sharp and nicely contrasty, and even using a CPL filter did not present any problems. A little noise is visible in the shadows, but that’s a small price to pay for the fast shutter speeds; truly the Sony RX1 is a wonder indeed.
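If you want to sanity-check those figures, plain exposure-value arithmetic (nothing here is specific to the RX1 or to this flight) puts that combination at roughly EV100 12–13, i.e. overcast-to-hazy daylight:

```python
import math

# EV100 = log2(N^2 / t) - log2(ISO / 100); standard exposure arithmetic, nothing camera-specific.
def ev100(f_number: float, shutter_s: float, iso: float) -> float:
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

for t in (1 / 2000, 1 / 3200):
    print(f"f/5.6, 1/{round(1 / t)} s, ISO 1250 -> EV100 ~ {ev100(5.6, t, 1250):.1f}")
# prints roughly 12.3 and 13.0: overcast-to-hazy daylight, plausible for a February flight.
```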
Decimation & scaling the mesh. At a resolution good enough to detect hard detail (gate posts, the Alan Wiliams MG turret and the spigot mortar base) the mesh was way too big for CAD to handle. 2m+ faces is not really workable on even a high spec PC. This was something of a set-back, I wanted to be able to digitise in CAD from the best reso model possible but the limits were hit by the mesh size. I reduced the mesh polygon count to 500k faces by ‘quadratic edge collapse’ at a 0.5% reduction value in MeshLab and got a model I could pan, zoom and scale….just on the limit of CAD performance. Regen times were awful but I could apply a scale factor to resize the mesh to accord with distances from TST survey…and the digitising can begin.
Digitising the surface for contouring. Because I wanted to get a selective, ‘edge-biased’ contour pattern, I selected points on the mesh surface manually. This could be automated, but after the first effort the selection of nodes by feature looked a better bet to enhance the record of the soft historic features. I upped the point density around the edges of features and dropped it down to skip hedges and trees. The selection process is always where surveying gets interesting, and the inherent bias towards the brief of ‘historic record’ or ‘topographic survey’ becomes apparent: selection is based on the end use of the data.
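Just to illustrate what an automated version might look like (this is not what was done here; file names, cell size and probabilities are invented for the sketch), point density could be biased towards steep cells of a gridded DSM, where bank and ditch edges show up as high slope:

```python
import numpy as np

# Hypothetical 'edge-biased' point selection on a gridded DSM:
# sample densely where slope is steep (bank/ditch edges), sparsely on the flat.
rng = np.random.default_rng(0)
dsm = np.load('dsm.npy')    # placeholder: 2-D array of heights in metres
cell = 0.1                  # assumed grid cell size in metres

dzdy, dzdx = np.gradient(dsm, cell)
slope = np.hypot(dzdx, dzdy)          # steep cells mark feature edges

# Keep-probability rises with slope, with a small floor so flat ground still gets sparse cover.
p_keep = 0.002 + 0.05 * slope / (slope.max() + 1e-9)
mask = rng.random(dsm.shape) < p_keep
rows, cols = np.nonzero(mask)
points = np.column_stack([cols * cell, rows * cell, dsm[rows, cols]])
np.savetxt('edge_biased_points.csv', points, delimiter=',', header='x,y,z', comments='')
```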
A touch-screen interface is excellent for the digitising: with the mesh on a locked layer and snap set to endpoint, it’s simply a matter of tapping away until the density looks ‘about right’.
Contouring. Once a decent-sized patch was covered, the contour model was tested to check the work was on track:
Smoothing and interval adjustments were made, some dud points (hedge and tree heights) were weeded out, and the plan takes shape: the contours are generated in CAD by TheoContour, with settings adjusted for interval, smoothing and line type, then recontoured and checked against the points.
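TheoContour does this inside CAD, but the underlying idea is simple enough to sketch: triangulate the digitised points into a TIN and thread contours through it at a fixed interval. The snippet below is a hedged analogue in Python, assuming the points have been exported as a plain x,y,z CSV; the file name and the 0.25 m interval are illustrative, not the settings used in the survey.

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as mtri

# Hedged analogue of the contouring step: TIN the digitised points, contour at a fixed interval.
pts = np.loadtxt('digitised_points.csv', delimiter=',', skiprows=1)  # columns: x, y, z
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

tin = mtri.Triangulation(x, y)              # Delaunay TIN of the point set
levels = np.arange(z.min(), z.max(), 0.25)  # contour interval in metres (illustrative)

fig, ax = plt.subplots(figsize=(8, 8))
cs = ax.tricontour(tin, z, levels=levels, linewidths=0.5)
ax.clabel(cs, fmt='%.2f')                   # label heights on the contours
ax.set_aspect('equal')
fig.savefig('contour_check.png', dpi=200)
```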
Next: finding the right point density, digitising the break-lines and developing the contour model…