Abstract
Variable compute performance has been widely reported for virtual machine instances of the same type, and at the same price, on Public Infrastructure Clouds. This has led to the proposal of a number of so-called 'instance seeking' or 'placement gaming' strategies, which aim to obtain better-performing instances at the same price for a given workload. However, a number of assumptions made in the models presented in the literature fail to hold for real large-scale Public Infrastructure Clouds. Using data from our experiments on EC2, we demonstrate the problems with these assumptions, discuss how such models are likely to underestimate the costs involved, and show why this literature requires a better Cloud Compute Model.