Saturday, 31 July 2010

Playing with Perception: Panoramas in CryEngine

Photographers have experimented with panoramic cameras since the days of the daguerreotype. Panoramas provide a fresh perspective on landscape that is both disconcerting (places and structures appear to assume different and unfamiliar relationships to each other in a panorama) and yet familiarly complete (the whole landscape is there, as we might see it when we change our view by turning on the spot). Static panoramic photographs are also strangely beautiful, and often form striking artistic impressions of land- and cityscapes.

Moving panoramas have become fashionable on the web (or perhaps have come and gone in fashion) as digital photography and software such as Apple's QuickTime VR supported their creation and display.  Google Street View is the ultimate example of this, and assumes a strange beauty when deconstructed to its constituent images.

Games-based visualisations are at first glance locked to the fixed, first-person viewpoint of the avatar: a naturalistic view, but limited in its artistic pretensions...



Following on from playing with the camera in CryEngine to simulate the effects of changing lenses on a 35mm camera, it struck me that combining a series of static "photographs" of the landscape using appropriate software (I used the free, and rather wonderful, Autostitch) would enable the creation of static panoramic images.  So here are two, of Laxton Castle: 360-degree panoramas, each created from approximately 30 "photographs" taken from a fixed location, each with a different view.  The results are quite pleasing and provide a different perspective on the landscape, escaping from the avatar, as it were.
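The shot count above is not arbitrary: how many frames a 360-degree panorama needs follows directly from the camera's field of view and the overlap the stitching software wants between neighbouring frames. As a sketch (the 80% overlap figure is my assumption, chosen because it reproduces the roughly 30 shots used here at a 60-degree view):

```python
import math

def shots_for_panorama(fov_deg, overlap=0.8):
    """Shots needed to cover a full 360-degree rotation, given the
    camera's horizontal field of view (degrees) and the fractional
    overlap the stitcher needs between neighbouring frames."""
    step = fov_deg * (1.0 - overlap)  # fresh ground covered per shot
    # small tolerance guards against floating-point round-up
    return math.ceil(360.0 / step - 1e-9)
```

With a 60-degree view and 80% overlap this gives 30 shots; with less overlap the count drops quickly, at the cost of giving Autostitch less to match on.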

Friday, 30 July 2010

Playing with Perception: Camera Control in CryEngine

Photographers know well that changing lenses provides access to different creative effects, from the panoramic impression of an ultra-wide-angle lens to the compressed perspective of a telephoto.  CryEngine's fixed 60-degree diagonal field of view approximates that of the human eye, roughly the same as a 40mm focal-length lens on a 35mm film camera.

While the avatar is restricted to this fixed view angle, playing about in Sandbox it is possible to create multiple cameras, each with its own field of view of up to 180 degrees. Here, by way of example, is a range of fields of view from the same viewpoint, each labelled with its approximate 35mm-film focal-length equivalent.

5mm (180 degrees)

24mm (80 degrees)

40mm (60 degrees)

200mm (10 degrees)

400mm (5 degrees)

1600mm (1.7 degrees)
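The focal-length equivalents listed above can be derived from the standard thin-lens relation between diagonal field of view and focal length on the 35mm frame (diagonal about 43.3mm). A minimal sketch; the values it produces match the table only approximately, as the post's figures are themselves rounded:

```python
import math

FILM_DIAGONAL_MM = 43.27  # diagonal of a 36 x 24 mm film frame

def equivalent_focal_length(fov_deg):
    """35mm-equivalent focal length (mm) for a given diagonal
    field of view, from fov = 2 * atan(d / 2f)."""
    return FILM_DIAGONAL_MM / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def field_of_view(focal_mm):
    """Inverse: diagonal field of view (degrees) for a focal length."""
    return 2.0 * math.degrees(math.atan(FILM_DIAGONAL_MM / (2.0 * focal_mm)))
```

For CryEngine's default 60-degree view this gives roughly 37mm, close to the 40mm quoted above; the widest settings diverge most, since a rectilinear 5mm lens yields about 154 degrees rather than a true 180.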
The artistic possibilities are immediately obvious, and choosing a narrow field of view (reaching for that telephoto lens) has the useful and realistic effect of compressing both perspective and depth of field.  Interesting potential here for cinematic-style zoom and dolly combinations.  Time to experiment.

Thursday, 29 July 2010

Redhill: Visual Impact on an Archaeological Landscape 3

A final version of the Redhill visualisation completed in CryEngine. Better in the end than I anticipated, but still problematic, particularly in terms of placing digital vegetation to mirror the real world, which becomes crucial when local vegetation can radically alter mid-range views and visibility.

8-bit CryEngine texture mask derived from a colour air-photograph co-registered with the terrain model in ArcGIS.
CryEngine terrain model, textured using an air-photo mask, (top) without and (below) with vegetation models.

One innovation that I am pleased with is using air-photographs to create Sandbox texture masks to rapidly add low-resolution landscape detail.  Here I've used a colour air-photograph, co-registered with the DTM in ArcGIS, to create an 8-bit monochrome mask, with features textured on other layers, such as trees, excluded.  The end results add useful detail to the CryEngine-rendered terrain and make placement of assets such as trees and hedges easier.
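The greyscale conversion step itself is simple enough to script rather than do by hand in an image editor. A minimal sketch using Pillow and NumPy (both assumed; the co-registration and tree-masking still happen upstream in the GIS):

```python
import numpy as np
from PIL import Image

def airphoto_to_mask(rgb_array, out_size=4096):
    """Collapse a co-registered colour air photo (H x W x 3 uint8 array)
    into the 8-bit monochrome mask Sandbox expects, resampled to the
    required mask resolution."""
    img = Image.fromarray(rgb_array, mode="RGB").convert("L")  # 8-bit mono
    return img.resize((out_size, out_size), Image.BILINEAR)

# hypothetical usage:
# mask = airphoto_to_mask(np.asarray(Image.open("airphoto.png")))
# mask.save("mask.bmp")  # Windows bitmap for Sandbox
```

Pillow's `convert("L")` applies the standard luma weighting, which works well enough here since the mask only needs to capture broad tonal variation, not colour.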




Based on these results, and the development of a fairly robust and rapid workflow from GIS to CryEngine, I'd be confident using CryEngine to take on more ambitious GIS-derived visualisations, particularly where good-quality asset models already exist.

Friday, 23 July 2010

Redhill: Visual Impact on an Archaeological Landscape 2

A quick update of work on the Redhill visualisation.  The model of Ratcliffe Power Station is taking shape: cooling towers are in place, with buildings and the main chimney still to add.  I'm quite pleased with the results of adding particle effects to the cooling towers, although the "smoke" needs to be white, not black, as it is in fact water vapour.




Overall the visualisation is working out better than I had anticipated, although the proof will still be comparison with ground level photographs.

Redhill: Visual Impact on an Archaeological Landscape

I've been working on a visualisation to attempt to illustrate the visual impact of a proposed development on an archaeological landscape.  Such work is usually the preserve of landscape architects working with high-end CAD and landscape design software (see, for example, Griffon et al., in press).  In this example I've attempted to use CryEngine as a low-cost visualisation tool to present a ground-level view of the landscape in order to assess likely visual impact.




We're looking at the hinterland of the Roman cult site at Ratcliffe on Soar, Nottinghamshire.  This is an archaeologically important landscape (protected as a Scheduled Ancient Monument) which is poorly understood and substantially affected by previous developments including railway, power station and major roads.  The issue is the extent to which proposed future development will intrude on what remains of the unspoilt landscape setting of the Roman site.

Initial work was carried out in ArcGIS, building a terrain model and a series of land use masks derived from Ordnance Survey mapping.
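Once the Ordnance Survey land use has been rasterised to a grid of class codes in the GIS, splitting it into the per-class masks CryEngine wants is mechanical. A sketch with NumPy; the class names and codes here are hypothetical stand-ins, not OS classifications:

```python
import numpy as np

# Hypothetical codes for a rasterised land-use layer.
LANDUSE = {"water": 1, "woodland": 2, "field": 3, "building": 4}

def class_masks(landuse_grid):
    """Split a grid of land-use class codes into one 8-bit mask per
    class (255 inside the class, 0 outside), ready for export as
    separate CryEngine texture layers."""
    return {name: ((landuse_grid == code) * 255).astype(np.uint8)
            for name, code in LANDUSE.items()}
```

Each mask then drives one texture layer in Sandbox, which is what allows the river, woodland and fields to land in their correct mapped positions.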

GIS-derived terrain model and land-use mask for use in CryEngine.
The terrain and masks were then used in CryEngine to build a basic landscape with correctly located river, woodland, fields and building placemarkers.  First results are below.  The overall visualisation now needs considerable work to add existing buildings, the development and landscape detail in key areas.  Results will be tested against ground level photographs for general reliability.

A selection of images from the initial CryEngine visualisation showing
the rivers Trent and Soar, woodland and building placemarkers.
I'm not convinced that CryEngine (or my modelling skills) are up to the task, particularly with the approximations in vertical scaling inherent in CryEngine, but it's an interesting project, and worth a try.

Griffon, S., et al. in press. Virtual reality for cultural landscape visualization. Virtual Reality.

Wednesday, 21 July 2010

Dan Pinchbeck: Development-led Research in Games

I don't plan on posting other people's content on this blog, but this is too good not to pass on.  Dan Pinchbeck, games researcher at the University of Portsmouth and leading light behind the Chinese Room, developer of Dear Esther and Korsakovia, lays out his philosophy on game development and research in a fascinating video lecture. Set aside an hour and a half and listen in.  It is worth it.

Waun Ddu: Experiments in Terrain Texturing in CryEngine

One of the things that I have found a niggling problem in CryEngine (and for that matter in every game engine I have tried) is creating natural-looking texture-mapped landscapes.  CryEngine offers a number of ways to achieve this, including painting textures directly onto the landscape, automatic mask generation based on elevation and slope, and the use of custom texture masks.  Using automatically generated masks, one is restricted to simple slope or elevation classifications based on the in-game terrain model (with its degraded resolution).  This is rarely wholly satisfactory, but at least better than the carpet-like effect of a single texture.

Texture variations based on products derived from the terrain model seem desirable, and represent real-world landscape change, since vegetation and landcover usually do vary in such a way; but I'm keen to keep the original higher-resolution terrain model as the base for these.  I've experimented with a variety of derived products from a GIS-based digital terrain model, in this instance based on 0.5m-resolution airborne lidar data for the medieval motte and Roman fortlet at Waun Ddu, near Llandovery in the Brecon Beacons.

DTM-derived layers: hillshade, slope severity and solar radiation.
Using ArcGIS I generated slope, hillshade and solar radiation maps of the terrain model.  These were used to create texture masks for CryEngine (for a 512 x 512 terrain model, Sandbox requires texture masks of 4096 x 4096 pixels in Windows bitmap format).

The results are interesting.  Individual DTM derived texture masks produce more subtle terrain texturing than using Sandbox's built in tools.  The solar radiation mask (based on the amount of sunlight received at each location in the terrain model) is particularly useful for showing vegetation variations, which are often based on such factors.

Terrain variation based on solar radiation map.
Terrain variation based on slope severity.

The masks also work well together, with a plain base texture, slope severity adding rocky outcrops, and solar radiation varying the vegetation.

Combined base, slope severity and solar radiation texture mapping

This looks like a good way of adding naturalistic variations in terrain texture based on real topographic data and I shall be experimenting more with this in future.

Monday, 5 July 2010

Flood Modelling and Visualisation

Continuing to explore the possibilities of using PixelActive's CityScape for modelling flooding I've gone back to some old data documenting particularly impressive floods on the River Trent, in Leicestershire, in December 1954.


This wonderful RAF air-photograph, taken at about 11am on 15th December, shows the Trent in full flood, with substantial overbank flooding affecting both sides of the river, and the area to the south (Lockington Marshes) almost completely inundated.  A south-bound express train can be seen, most visibly by its train of steam, about to cross the Trent.  But how deep was the flooding?  Some while ago we calculated the extent of the floods by comparing the photographic evidence with recent lidar terrain data.

Lidar Terrain Model.

"Flooded" lidar (in blue) superimposed on air-photograph.

Together, these two datasets allow us to duplicate the photographically documented flooding by selective colouring of the lidar data. By our estimates, the river level at Lockington was up to 1.2m above its usual winter level.  So what about visualisation?  Using CityScape and the lidar DTM (exported as a binary grid from ArcGIS, which overcomes the GeoTIFF issues), it is possible to flood the terrain to roughly duplicate the level of inundation of December 1954.

Floods of 1954 simulated in CityScape.

Moving from the 2D map view CityScape allows some nice 3D visualisation of the extent and character of these floods.

Looking north-east from an earthwork platform which probably served as a livestock refuge.

Overlooking Sawley Cliff Farm, isolated by floodwaters.

Sawley Cliff Farm was abandoned not long after the 1954 floods, and the whole area remains prone to periodic flooding.  The Lockington Marshes have formed the study area for intensive research into river-channel formation processes and the relationship between archaeology and dynamic river systems. Does visualisation such as this contribute to such studies? I'm not sure at present, although it may at least open up the area to other, more visually oriented audiences.  Of course, these visualisations show simply the here and now (well, almost); perhaps more interesting would be to recreate the river-scape of 2,000 years ago (channel movement, vegetation, environment and settlement) and then explore the impact of the dynamic River Trent on that.