In information theory, channel simulation[1][2] (also known as channel synthesis[3]) refers to the setting where the encoder and the decoder wish to simulate a noisy channel using noiseless communication. It is the dual problem to channel coding, which is the simulation of a noiseless communication channel using a noisy channel.[2]

The $n$-letter setting

Consider a memoryless channel $P_{Y|X}$ to be simulated, where the input alphabet is $\mathcal{X}$ and the output alphabet is $\mathcal{Y}$. A channel simulation scheme in the $n$-letter setting consists of an encoding function $E: \mathcal{X}^n \times \mathcal{W} \to \{0,1\}^{nR}$ (possibly randomized) and a decoding function $D: \{0,1\}^{nR} \times \mathcal{W} \to \mathcal{Y}^n$ (possibly randomized), where $\mathcal{W}$ is the alphabet of the common randomness $W$, and $R$ is the communication rate. Upon observing the input symbols $X^n = (X_1, \ldots, X_n)$, which is an i.i.d. sequence following $P_X$, and the common randomness $W$, the encoder sends the message $M = E(X^n, W)$ (consisting of $nR$ bits) to the decoder. The decoder outputs $Y^n = D(M, W)$ based on $M$ and $W$. The goal is to make $Y^n$ appear almost as if it were the output of the given memoryless channel upon the input $X^n$, in the sense that the total variation distance between the ideal distribution $P_X^n \times P_{Y|X}^n$ and the joint distribution of $(X^n, Y^n)$ induced by the scheme is small. In the asymptotic setting where $n \to \infty$, the total variation distance is required to approach zero.
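In the notation above, writing $P^{\mathrm{sim}}_{X^n Y^n}$ for the joint distribution induced by the scheme (this symbol is introduced here only for brevity), the fidelity requirement reads

$$ d_{\mathrm{TV}}\!\left( P^{\mathrm{sim}}_{X^n Y^n},\; P_X^n \times P_{Y|X}^n \right) \to 0 \quad \text{as } n \to \infty, $$

where $d_{\mathrm{TV}}$ denotes the total variation distance and $P_X^n \times P_{Y|X}^n$ is the joint distribution of the input sequence together with the outputs of $n$ independent uses of the channel $P_{Y|X}$.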

The central result of channel simulation is the reverse Shannon theorem, which states that if the common randomness is unlimited (we can choose any alphabet $\mathcal{W}$), then in the asymptotic setting $n \to \infty$ the smallest possible rate $R$ is the mutual information $I(X;Y)$.[1][2] This is the dual result to Shannon's noisy channel coding theorem. The smallest possible rate for the case with limited or no common randomness has also been characterized.[3]
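Here $I(X;Y)$ is evaluated with respect to the input distribution $P_X$ and the channel $P_{Y|X}$; explicitly, for finite alphabets,

$$ I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} P_X(x)\, P_{Y|X}(y \mid x) \log_2 \frac{P_{Y|X}(y \mid x)}{\sum_{x' \in \mathcal{X}} P_X(x')\, P_{Y|X}(y \mid x')}, $$

measured in bits per input symbol.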

Channel simulation is closely related to lossy data compression and rate–distortion theory. One way to prove the achievability part of the rate–distortion theorem is to simulate the channel $P_{Y|X}$ that attains the minimum in the rate–distortion function, which requires a communication rate of $I(X;Y)$.[4] An advantage of channel simulation over traditional lossy compression schemes is that the distribution of the output can be controlled, which may improve its perceptual quality.[5][6]
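For concreteness, for a distortion measure $d$ the rate–distortion function of the source $P_X$ is

$$ R(D) = \min_{P_{Y|X} :\; \mathbb{E}[d(X,Y)] \le D} I(X;Y), $$

so simulating the minimizing channel at a rate close to $I(X;Y)$ yields a lossy compression scheme operating at a rate close to $R(D)$ while meeting the distortion constraint in expectation.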

The one-shot setting

Channel simulation can be performed even if the length of the input is $n = 1$. In this setting, the message $M$ is often allowed to be a variable-length codeword from a prefix-free code rather than a fixed-length string. Several schemes have been proposed.[7][8][9] In particular, the strong functional representation lemma[9][10] states that, with unlimited common randomness, channel simulation can be performed exactly (the total variation distance is zero) with an expected length of $M$ of at most $I(X;Y) + \log_2(I(X;Y) + 1) + 5$ bits, which is close to the asymptotic rate $I(X;Y)$.
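In one common formulation, the lemma states that for any jointly distributed $(X, Y)$ there exists a random variable $W$ independent of $X$ such that $Y$ is a function of $(X, W)$ and

$$ H(Y \mid W) \le I(X;Y) + \log_2\big(I(X;Y) + 1\big) + 4. $$

Here $W$ plays the role of the common randomness, and encoding $Y$ given $W$ with a prefix-free code (which costs at most one extra bit in expectation beyond $H(Y \mid W)$) yields the expected-length bound above.[9][10]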

Common approaches to the construction of channel simulation schemes include rejection sampling,[7] importance sampling,[11] dithering[12][13] and Poisson point processes.[9]
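As an illustration of the rejection-sampling approach, the following Python sketch (an illustrative simplification under simple assumptions, not the exact construction of the cited works) simulates a single use of a discrete channel. The encoder and the decoder derive a common list of candidate outputs, drawn i.i.d. from the output marginal $P_Y$, from a shared seed; the encoder performs rejection sampling against the target conditional $P_{Y|X=x}$ and transmits the index of the first accepted candidate; the decoder outputs the candidate at that index.

    import numpy as np

    def shared_candidates(seed, p_y, n_max=100000):
        # Both parties derive the same candidate outputs and acceptance coins
        # from the shared seed, which acts as the common randomness.
        rng = np.random.default_rng(seed)
        candidates = rng.choice(len(p_y), size=n_max, p=p_y)  # i.i.d. draws from P_Y
        coins = rng.random(n_max)                              # uniform acceptance coins
        return candidates, coins

    def encode(x, p_y_given_x, p_y, candidates, coins):
        # Rejection sampling: accept candidate y with probability q(y) / (M * p_y(y)),
        # where q = P_{Y|X=x} and M upper-bounds the likelihood ratio q(y) / p_y(y).
        q = p_y_given_x[x]
        m = np.max(q / p_y)
        for n, (y, u) in enumerate(zip(candidates, coins)):
            if u < q[y] / (m * p_y[y]):
                return n  # transmit the index of the first accepted candidate
        raise RuntimeError("no candidate accepted; increase n_max")

    def decode(index, candidates):
        # The decoder holds the same candidate list and simply looks up the index.
        return candidates[index]

    # Example: one use of a binary symmetric channel with crossover probability 0.1
    # and a uniform input, so the output marginal is also uniform.
    p_y_given_x = np.array([[0.9, 0.1],
                            [0.1, 0.9]])
    p_y = np.array([0.5, 0.5])
    cand, coins = shared_candidates(seed=0, p_y=p_y)
    idx = encode(x=1, p_y_given_x=p_y_given_x, p_y=p_y, candidates=cand, coins=coins)
    y = decode(idx, cand)  # y is distributed exactly as P(Y | X = 1)

The accepted candidate is exactly distributed according to $P_{Y|X=x}$, but encoding the index of this basic scheme costs on the order of $\log_2 \max_y P_{Y|X}(y \mid x) / P_Y(y)$ bits; more refined constructions (e.g., the Poisson functional representation[9]) bring the expected description length close to $I(X;Y)$.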

Applications

Channel simulation has been applied to lossy and learned data compression, for example to encode a sample from the latent distribution of a neural compression model (relative entropy coding),[11][12][14] and to the compression of locally differentially private mechanisms in differential privacy.[10][15]

References

  1. ^ a b Bennett, Charles H.; Shor, Peter W.; Smolin, John A.; Thapliyal, Ashish V. (2002). "Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem". IEEE Transactions on Information Theory. 48 (10): 2637–2655. arXiv:quant-ph/0106052. doi:10.1109/TIT.2002.802612.
  2. ^ a b c Bennett, Charles H.; Devetak, Igor; Harrow, Aram W.; Shor, Peter W.; Winter, Andreas (2014). "The quantum reverse Shannon theorem and resource tradeoffs for simulating quantum channels". IEEE Transactions on Information Theory. 60 (5): 2926–2959. arXiv:0912.5537. doi:10.1109/TIT.2014.2309968.
  3. ^ a b Cuff, Paul (2013). "Distributed Channel Synthesis". IEEE Transactions on Information Theory. 59 (11): 7071–7096. arXiv:1208.4415. doi:10.1109/TIT.2013.2279330.
  4. ^ Winter, Andreas (2002). "Compression of Sources of Probability Distributions and Density Operators". arXiv preprint. arXiv:quant-ph/0208131. Bibcode:2002quant.ph..8131W.
  5. ^ Blau, Yochai; Michaeli, Tomer (2019). "Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff" (PDF). Proceedings of the International Conference on Machine Learning. PMLR. pp. 675–685. arXiv:1901.07821.
  6. ^ Theis, Lucas; Wagner, Aaron B. (2021). "A Coding Theorem for the Rate-Distortion-Perception Function". Neural Compression Workshop at the International Conference on Learning Representations (ICLR). arXiv:2104.13662.
  7. ^ a b c Harsha, Prahladh; Jain, Rahul; McAllester, David; Radhakrishnan, Jaikumar (2010). "The Communication Complexity of Correlation". IEEE Transactions on Information Theory. 56 (1): 438–449. doi:10.1109/TIT.2009.2037047.
  8. ^ a b Braverman, Mark; Garg, Ankit (2013). "Public vs Private Coin in Bounded-Round Information". Electronic Colloquium on Computational Complexity. 130.
  9. ^ a b c Li, Cheuk Ting; El Gamal, Abbas (2018). "Strong Functional Representation Lemma and Applications to Coding Theorems". IEEE Transactions on Information Theory. 64 (11): 6967–6978. arXiv:1701.02827. doi:10.1109/TIT.2018.2854719.
  10. ^ a b Li, Cheuk Ting (2024). "Channel Simulation: Theory and Applications to Lossy Compression and Differential Privacy". Foundations and Trends® in Communications and Information Theory. 21 (6): 847–1106. doi:10.1561/0100000141.
  11. ^ a b Havasi, Marton; Peharz, Robert; Hernández-Lobato, José Miguel (2019). "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters". Proceedings of the International Conference on Learning Representations (ICLR). arXiv:1810.00440.
  12. ^ a b Agustsson, Eirikur; Theis, Lucas (2020). "Universally Quantized Neural Compression". Advances in Neural Information Processing Systems (NeurIPS). Vol. 34. arXiv:2006.09952.
  13. ^ Hegazy, Mahmoud; Li, Cheuk Ting (2022). "Randomized Quantization with Exact Error Distribution". 2022 IEEE Information Theory Workshop (ITW). pp. 350–355.
  14. ^ Flamich, Gergely; Havasi, Marton; Hernández-Lobato, José Miguel (2020). "Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding". Advances in Neural Information Processing Systems (NeurIPS). arXiv:2010.01185.
  15. ^ Shah, Abhin; Chen, Wei-Ning; Ballé, Johannes; Kairouz, Peter; Theis, Lucas (2022). "Optimal Compression of Locally Differentially Private Mechanisms". Proceedings of The 25th International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research. Vol. 151. pp. 7680–7723. arXiv:2111.00092.