Abstract
Recent advances in diffusion models have led to a significant breakthrough in generative modeling. Combining generative models with semantic communication (SemCom) enables high-fidelity exchange of semantic information at ultra-low rates. In this paper, a novel generative SemCom framework for image tasks is proposed, in which pre-trained foundation models serve as the semantic encoder and decoder for semantic feature extraction and image regeneration, respectively. The mathematical relationship between transmission reliability and the perceptual quality of the regenerated images is modeled through numerical simulations on the Kodak dataset, and the semantic values of the extracted features are defined accordingly. Furthermore, we investigate the semantic-aware power allocation problem, aiming to minimize total power consumption while guaranteeing semantic performance. To solve this problem, two semantic-aware power allocation methods are proposed, based on constraint decoupling and bisection search, respectively. Numerical results demonstrate that the proposed semantic-aware methods outperform the conventional approach in terms of total power consumption.