Abstract
Limitations and constraints in signal acquisition systems often mean that signals are measured in a compressive manner, i.e. with dimensionality reduction, or that the acquired information is compressed, distorted, or lost. Recovering a signal from compressive measurements is therefore an inverse problem that can be challenging to solve. A popular assumption is that the original signal is sparse in an appropriate domain, i.e. that it can be represented as a linear combination of a few basis signals – called atoms – from a dictionary. In this thesis we propose contributions in the field of sparse signal recovery from compressive measurements.

We first address the recovery of a signal from linear compressive measurements, such as noisy or missing measurements. Interpreting the sparse decomposition problem as a Maximum-Likelihood estimation problem, we propose a novel algorithm that solves a Maximum-A-Posteriori estimation problem for the sparse coefficients, exploiting prior knowledge about their first-order statistics.

We then address the recovery of sparse signals from nonlinear measurements, such as clipped or quantized measurements. This recovery is often formulated as a constrained sparse decomposition problem, which can be difficult to solve. We propose a novel framework that tackles sparse recovery from clipped, quantized, and linear measurements in a unified fashion, and we show that many well-known sparse decomposition algorithms proposed in the linear case, such as iterative pursuit or proximal algorithms, extend to the nonlinear scenario.

Finally, we study dictionary learning from nonlinear measurements. In the context of linear inverse problems, learning – or adapting – the dictionary to the observed data has been shown to yield better reconstructions than using a fixed dictionary. We propose algorithms for dictionary learning from nonlinear measurements such as clipping and quantization.
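To make the linear setting concrete, the following is a minimal NumPy sketch of sparse recovery from missing (masked) measurements using ISTA, a standard proximal algorithm of the kind the abstract mentions. It is purely illustrative: the dictionary, mask ratio, regularization weight, and iteration count are arbitrary toy choices, not the thesis's actual models or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions): a random dictionary D,
# an s-sparse coefficient vector x_true, and a mask simulating
# missing samples in the measurement process.
n, k, s = 64, 128, 5                     # signal length, atoms, sparsity
D = rng.standard_normal((n, k)) / np.sqrt(n)
x_true = np.zeros(k)
x_true[rng.choice(k, s, replace=False)] = rng.standard_normal(s)
mask = rng.random(n) > 0.3               # roughly 70% of samples observed
A = D[mask]                              # effective compressive operator
y = A @ x_true                           # observed (incomplete) measurements

def soft(z, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA for  min_x  0.5 * ||y - A x||^2 + lam * ||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(k)
for _ in range(500):
    x = soft(x + (A.T @ (y - A @ x)) / L, lam / L)

rel_err = np.linalg.norm(D @ x - D @ x_true) / np.linalg.norm(D @ x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The nonlinear settings studied in the thesis (clipping, quantization) replace the linear operator above with a nonlinear measurement model, which is what motivates the unified framework.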