Wednesday, 23 June 2010

Visualising Terrestrial Laser Scanning

The idea of pointing a terrestrial laser scanner at the ground, rather than at some sort of building or structure, may not be entirely new, but the question remains: what to do with the huge quantities of data such idiocy generates?

The data visualised here are of the north-eastern entrance of the Iron Age hillfort at British Camp, Little Malvern, and were collected as part of the coursework of one of our postgraduate students taking the MA programme in Landscape Archaeology, GIS and Virtual Environments.  He used a Leica HDS3000 TLS, hauled to the top of the hill twice (it failed to work the first time), and collected 3.2 million xyz data points from three survey stations.  The point data was stitched together in Leica Cyclone, but then, what?




Effective data exploration was undertaken in the hugely capable Quick Terrain Modeler (usually used for airborne lidar data), but decimating the data to allow visualisation in ArcGIS and ArcScene produced disappointing results. Here are the same data visualised using CryEngine 2 and a little imagination.
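The decimation step can be sketched in a few lines. This is purely illustrative (an assumption for the example: the real workflow used Quick Terrain Modeler and ArcGIS tools, not hand-rolled code), but it shows the trade being made: keeping every Nth xyz point thins 3.2 million points down to something ArcScene can draw, at the cost of spatial detail.

```python
# Naive point-cloud decimation: keep every Nth point.
# Hypothetical stand-in for the thinning done in QT Modeler / ArcGIS.
def decimate(points, keep_every=10):
    """Return every Nth point from an xyz point list."""
    return points[::keep_every]

# Stand-in cloud the size of the British Camp survey (3.2 million points).
cloud = [(x * 0.01, 0.0, 0.0) for x in range(3_200_000)]
thinned = decimate(cloud, keep_every=100)
print(len(thinned))  # 32,000 points survive a 1-in-100 decimation
```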


It is fair, I think, to say that the metrical accuracy of the CryEngine model is way below that of ArcGIS or QT Modeler.  X-Y co-ordinates are fine, although limited to 1m spatial resolution, well below the centimetre level of the original data. The principal problem with CryEngine is the Z scaling, limited to a range of 256 increments from lowest to highest value (that is, the 256 shades of grey possible in an 8-bit heightmap).  At British Camp the vertical range in the original data is 90m, so in our CryEngine model each heightmap increment is equal to roughly 35cm, with the inevitable compromise in resolution.
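The arithmetic behind that 35cm figure is simple enough to check directly, using only the numbers given above (90m of relief quantised into the 256 grey levels of an 8-bit heightmap):

```python
# Vertical resolution of an 8-bit heightmap at British Camp.
GREY_LEVELS = 256        # shades of grey in an 8-bit heightmap
VERTICAL_RANGE_M = 90.0  # lowest to highest value in the scan data

increment_m = VERTICAL_RANGE_M / GREY_LEVELS
print(f"Each grey level spans {increment_m:.3f} m")   # ~0.352 m, i.e. ~35 cm

# Worst-case quantisation error is half an increment:
print(f"Max rounding error: {increment_m / 2:.3f} m")  # ~0.176 m
```

A 16-bit heightmap, where the engine supports one, would shrink the increment by a factor of 256 and put the Z resolution back near the centimetre level of the original survey.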

Nonetheless, the dressed terrain model, with texture, vegetation and effects, is a pleasing, interactive rendering of the landscape which captures the original more effectively than the metrically correct, but dry, 2D visualisations.




So what is going on here?  For my money, these lower-resolution game-based visualisations more truly present landscape in the way our brains cognitively interact with the world around us.  We don't see millions of data points; we see impressions of landscape, tussocky grass, the fleeting movement of birds and the interplay of light and shadow on the land.  So, feed the mind with familiar fare and happy sensory engagement results.  And what of those lost millions of points?  Still there in the ether for analysis if needed...