Distance

In coding theory, all sorts of distances show up.

That's because the magnitude of the noise is expressed by a distance, and because there are many kinds of noise.

For example:

・The magnitude of bit-flip errors is expressed by the Hamming distance.

・The magnitude of insertion/deletion errors is expressed by the Levenshtein distance.

・The magnitude of permutation errors is expressed by the Ulam distance, among others.

Something like that (a quick code sketch of these distances is given below).
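As a toy illustration (my own sketch, independent of the papers below; the function names and example inputs are made up for this post), the three distances can be computed in Python like this:

# A toy sketch (mine, not from any of the papers below) of the three distances above.

def hamming(x, y):
    # Hamming distance: number of positions where equal-length sequences differ.
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def levenshtein(x, y):
    # Levenshtein distance: minimum number of insertions, deletions, and
    # substitutions turning x into y (standard dynamic programming).
    prev = list(range(len(y) + 1))
    for i, a in enumerate(x, 1):
        cur = [i]
        for j, b in enumerate(y, 1):
            cur.append(min(prev[j] + 1,              # delete a
                           cur[j - 1] + 1,           # insert b
                           prev[j - 1] + (a != b)))  # substitute a -> b
        prev = cur
    return prev[-1]

def ulam(p, q):
    # Ulam distance between two permutations of the same symbols:
    # n minus the length of their longest common subsequence (plain O(n^2) LCS).
    n = len(p)
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if p[i - 1] == q[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return n - dp[n][n]

print(hamming("10110", "10011"))         # 2: two bit flips
print(levenshtein("10110", "1010"))      # 1: one deletion
print(ulam([1, 2, 3, 4], [2, 3, 4, 1]))  # 1: one symbol moved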

While thinking about this, I ran into something I don't know:

・Is there a name for the distance that expresses the magnitude of errors made up of erasures and bit flips?

I've never seen one, and I've never heard of one.

If anyone knows, please tell me.


Deletion-code paper memo: 2022/06/10

Title: Coordinate-ordering-free Upper Bounds for Linear Insertion-Deletion Codes
Authors: Hao Chen
Journal: IEEE Transactions on Information Theory
Issue: -
DOI: 10.1109/TIT.2022.3167662

Abstract: In this paper we prove several coordinate-ordering-free upper bounds on the insdel distances of linear codes. Our bounds are stronger than some previous known bounds. We apply these upper bounds to AGFC codes from some cyclic codes and one algebraic-geometric code with any rearrangement of coordinate positions. A strong upper bound on the insdel distances of Reed-Muller codes with the special coordinate ordering is also given.

Title: Explicit and Efficient Constructions of Linear Codes Against Adversarial Insertions and Deletions
Authors: Roni Con; Amir Shpilka; Itzhak Tamo
Journal: IEEE Transactions on Information Theory
Issue: -
DOI: 10.1109/TIT.2022.3173185

Abstract: In this work, we study linear error-correcting codes against adversarial insertion-deletion (insdel) errors, a topic that has recently gained a lot of attention. We construct linear codes over Fq, for q = poly(1/ε), that can efficiently decode from a δ fraction of insdel errors and have rate (1 - 4δ)/8 - ε. We also show that by allowing codes over Fq^2 that are linear over Fq, we can improve the rate to (1 - δ)/4 - ε while not sacrificing efficiency. Using this latter result, we construct fully linear codes over F2 that can efficiently correct up to δ < 1/54 fraction of deletions and have rate R = (1 - 54 · δ)/1216. Cheng, Guruswami, Haeupler, and Li [4] constructed codes with (extremely small) rates bounded away from zero that can correct up to a δ < 1/400 fraction of insdel errors. They also posed the problem of constructing linear codes that get close to the half-Singleton bound (proved in [4]) over small fields. Thus, our results significantly improve their construction and get much closer to the bound.
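Just to get a feel for these numbers, here is my own back-of-the-envelope check (not from the paper; I am taking the half-Singleton bound for linear codes to be R ≤ (1 - δ)/2, which is how I read the reference to [4] and is not stated in the abstract itself):

# Rates quoted in the abstract, plus the half-Singleton bound as I understand it
# (R <= (1 - delta)/2 for linear insdel codes; my assumption, not quoted above).
def rate_fq(delta, eps):    # linear codes over Fq, q = poly(1/eps)
    return (1 - 4 * delta) / 8 - eps

def rate_fq2(delta, eps):   # Fq^2-codes that are linear over Fq
    return (1 - delta) / 4 - eps

def half_singleton(delta):
    return (1 - delta) / 2

eps = 0.001
for delta in (0.01, 0.05, 0.10):
    print(f"delta={delta:.2f}: Fq rate={rate_fq(delta, eps):.3f}, "
          f"Fq^2 rate={rate_fq2(delta, eps):.3f}, "
          f"half-Singleton bound={half_singleton(delta):.3f}")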

Deletion-code paper memo: 2022/06/08

Title: Systematic Codes Correcting Multiple-Deletion and Multiple-Substitution Errors
Authors: Wentu Song; Nikita Polyanskii; Kui Cai; Xuan He
Journal: IEEE Transactions on Information Theory
Issue: -
DOI: 10.1109/TIT.2022.3177169

Abstract: We consider construction of deletion and substitution correcting codes with low redundancy and efficient encoding/decoding. First, by simplifying the method of Sima et al. (ISIT 2020), we construct a family of binary single-deletion s-substitution correcting codes with redundancy (s + 1)(2s + 1) log2 n + o(log2 n) and encoding complexity O(n^2), where n is the blocklength of the code and s ≥ 1. The construction can be viewed as a generalization of Smagloy et al.'s construction (ISIT 2020), and for the special case of s = 1, our construction is a slight improvement in redundancy of the existing works.
Further, we modify the syndrome compression technique by combining a precoding process and construct a family of systematic t-deletion s-substitution correcting codes with polynomial time encoding/decoding algorithms for both binary and nonbinary alphabets, where t ≥ 1 and s ≥ 1. Specifically, our binary t-deletion s-substitution correcting codes of length n have redundancy (4t + 3s) log2 n + o(log2 n), whereas, for q being a prime power, the redundancy of q-ary t-deletion s-substitution codes is asymptotically (4t + 4s - 1 - ⌊(2s-1)/q⌋) logq n + o(logq n) as n → ∞. We also construct a family of binary systematic t-deletion correcting codes (i.e., s = 0) with redundancy (4t - 1) log2 n + o(log2 n). The proposed constructions improve upon the redundancy of the state-of-the-art constructions.
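To see what these redundancy expressions mean concretely, here is a small calculation of the leading terms for a sample blocklength (my own illustration; the o(log n) terms are dropped and the parameters are arbitrary):

import math

# Leading terms of the redundancy expressions quoted above (o(log n) dropped).
def single_del_s_sub(n, s):       # binary, 1 deletion + s substitutions
    return (s + 1) * (2 * s + 1) * math.log2(n)

def t_del_s_sub(n, t, s):         # binary, t deletions + s substitutions
    return (4 * t + 3 * s) * math.log2(n)

def t_del_only(n, t):             # binary, t deletions (s = 0)
    return (4 * t - 1) * math.log2(n)

n = 2 ** 12  # blocklength 4096, so log2 n = 12
print(single_del_s_sub(n, s=1))   # (1+1)(2*1+1) * 12 = 72 bits
print(t_del_s_sub(n, t=2, s=1))   # (4*2+3*1) * 12 = 132 bits
print(t_del_only(n, t=2))         # (4*2-1) * 12 = 84 bits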