Abstract
An autonomous agent placed, without any prior knowledge, in an environment with no goals or reward function must develop a model of that environment through an unguided approach, by discovering patterns occurring in its observations. We expand on a prior algorithm that enables an agent to achieve this by learning clusters in probability distributions of one-dimensional sensory variables, and propose a novel quadtree-based algorithm for two dimensions. We then evaluate it in a dynamic continuous domain in which a ball is thrown onto uneven terrain, simulated using a physics engine. Finally, we put forward criteria that can be used to evaluate a domain model without requiring goals, and apply them to our work. We show that adding two-dimensional rules to the algorithm improves the model, and that such models can be transferred to similar but previously unseen environments.