Simulate radiocarbon dating

Mid-points of confidence intervals, modal dates, or the weighted mean can be used. However, it should be stressed that because the radiocarbon probability density of any single calendar year is very low (typically never greater than 0.05), any point estimate of a radiocarbon date is much more likely to be "wrong" than "right", underlining the need to work with the full probability distribution wherever possible.

Many authors have nonetheless defended the technique, as cumulative (summed) probability distributions can be a useful way to explore trends in data aggregated from different sites, for example in environmental history. A number of recent papers attempt to improve summed probability as a modelling technique by confronting the criticism that such curves are unprobabilistic representations of research interest rather than a true pattern born of fluctuating activity levels in the past. One such approach, termed the "University College London (UCL) method", uses simulation and back-calibration to generate a null hypothesis, which can also factor in the expected rates of both population growth and taphonomic loss of archaeological materials through time.
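A simulation/back-calibration null model of this general kind can be sketched as follows. This is a minimal illustration in Python (the paper's own scripts are in R), and the straight-line "calibration curve", growth rate, and error values are illustrative assumptions only; a real analysis would use a published curve such as IntCal20 and the published method's own parameters.

```python
# Sketch: simulate calendar dates under exponential growth, back-calculate
# them to radiocarbon ages, and sum their error densities into an expected
# summed probability distribution (SPD) under the null model.
import numpy as np

rng = np.random.default_rng(42)

def calbp_to_c14(cal_bp):
    """Toy calibration curve mapping calendar age (cal BP) to a 14C age.
    Purely illustrative: a real analysis would interpolate IntCal20."""
    return 0.95 * cal_bp + 100.0

def simulate_spd(n_dates, t_min=3000, t_max=6000, growth=0.001, err=30.0):
    """Draw calendar dates from an exponential-growth model, convert them
    to noisy 14C ages, and sum the per-date normal densities."""
    grid = np.arange(t_min, t_max + 1)        # calendar grid, cal BP
    weights = np.exp(-growth * grid)          # fewer dates further back in time
    weights /= weights.sum()
    cal_dates = rng.choice(grid, size=n_dates, p=weights)
    c14_ages = calbp_to_c14(cal_dates) + rng.normal(0.0, err, n_dates)
    # Density of each simulated 14C age, evaluated along the calendar grid.
    c14_grid = calbp_to_c14(grid)
    dens = np.exp(-0.5 * ((c14_grid[:, None] - c14_ages[None, :]) / err) ** 2)
    dens /= dens.sum(axis=0)                  # normalise each date to mass 1
    return grid, dens.sum(axis=1) / n_dates   # summed probability over the grid

grid, spd = simulate_spd(500)
```

Repeating the simulation many times yields an envelope of SPDs expected under the growth model, against which an observed SPD can be compared; a taphonomic-loss term could be folded into the sampling weights in the same way.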

Archaeology is witnessing a proliferation of studies that avail of large archaeological–chronological datasets, especially radiocarbon data.

These are reviewed in this paper, which also contains open-source scripts for calibrating radiocarbon dates and modelling them in space and time, using the R computer language and GRASS GIS.
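The core calibration step those scripts perform can be sketched in a few lines. This example is in Python rather than the R used by the paper, and the straight-line curve and its uncertainty are placeholder assumptions standing in for a real calibration curve such as IntCal20.

```python
# Sketch: probabilistic calibration of one radiocarbon determination.
# For each candidate calendar year, score how well the measured 14C age
# matches the curve, combining lab and curve uncertainties.
import numpy as np

def calibrate(c14_age, c14_err, cal_grid=np.arange(0, 12000)):
    """Return a normalised probability for each calendar year (cal BP)."""
    curve_mu = 0.95 * cal_grid + 100.0    # toy calibration curve (assumption)
    curve_err = 20.0                      # toy curve uncertainty (assumption)
    sigma = np.hypot(c14_err, curve_err)  # combine errors in quadrature
    dens = np.exp(-0.5 * ((c14_age - curve_mu) / sigma) ** 2)
    return cal_grid, dens / dens.sum()

# Calibrate a determination of 4500 ± 35 14C BP.
cal_grid, prob = calibrate(4500, 35)
```

With a realistic wiggly curve the resulting distribution is typically multimodal, which is exactly why the single-year probabilities discussed above stay so low.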

The case studies that undertake new analysis of archaeological data are (1) the spread of the Neolithic in Europe, (2) the economic consequences of the Great Famine and Black Death in fourteenth-century Britain and Ireland, and (3) the role of climate change in influencing cultural change in Late Bronze Age/Early Iron Age Ireland.

Further details and the R code necessary to calibrate radiocarbon dates are given with the paper. When summarising ¹⁴C datasets or age-depth models, it is sometimes useful to characterise each radiocarbon date by an average value, a "best guess" at a single point in time that describes the sample's age.
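The point summaries mentioned earlier (weighted mean, modal year, interval mid-point) can all be read off a calibrated distribution. The sketch below, in Python rather than the paper's R, applies them to a toy bimodal density standing in for a wiggly calibrated date; the density itself is an illustrative assumption.

```python
# Sketch: three common point summaries of a calibrated radiocarbon date.
import numpy as np

def point_summaries(cal_grid, prob, level=0.95):
    """Weighted mean, modal year, and mid-point of a central interval."""
    weighted_mean = float(np.sum(cal_grid * prob))
    mode = float(cal_grid[np.argmax(prob)])
    cdf = np.cumsum(prob)                 # central interval from the CDF
    lo = float(cal_grid[np.searchsorted(cdf, (1 - level) / 2)])
    hi = float(cal_grid[np.searchsorted(cdf, 1 - (1 - level) / 2)])
    midpoint = (lo + hi) / 2
    return weighted_mean, mode, midpoint

# Toy bimodal density imitating a multimodal calibrated date (assumption).
grid = np.arange(4000, 5000)
dens = (np.exp(-0.5 * ((grid - 4300) / 40) ** 2)
        + 0.8 * np.exp(-0.5 * ((grid - 4600) / 60) ** 2))
dens /= dens.sum()
mean, mode, mid = point_summaries(grid, dens)
```

Note how the three summaries disagree for a multimodal distribution: the mode sits on one peak while the weighted mean and interval mid-point fall between the peaks, in a region of near-zero probability, which is why any single point estimate is more likely "wrong" than "right".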

A common application is the selection of a subset of dates from a larger database for more rigorous assessment using Bayesian phase models or one of the techniques described below.
