Independent component analysis (ICA) is a useful extension of standard principal component analysis (PCA). The ICA model is used mainly for the blind separation of unknown source signals from their linear mixtures. In some applications the mixing coefficients are completely unknown, while some knowledge of the temporal model exists. CDMA (code division multiple access) is one such application: only the code of the desired mobile user is known, while the codes of the interfering users are unknown. In this case linear methods such as the matched filter fail to estimate the parameters. In this work, we introduce two learning source separation methods for estimating the CDMA symbols. The first method is based on competitive learning and exploits the fact that the data have a linear form in which the coefficients (sources) of the linear basis vectors are binary symbols. Because of the strongly nonlinear structure of the source process (the symbols are clustered), the system allows oversaturation, i.e. the number of binary signals can exceed the code length. The second method is a batch version of a neural independent component analyzer. Simulations show that the symbols can be estimated without any knowledge of the chip sequences.
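The clustering idea behind the first method can be illustrated with a minimal sketch. Because the sources are binary, the noiseless mixtures x = As take only 2^K distinct values, so a simple winner-takes-all competitive rule can locate the clusters without knowing the mixing matrix (the spreading codes). The setup below (user count, code length, learning rate, farthest-point initialization) is a hypothetical illustration, not the paper's actual algorithm or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 users, code (chip) length 8, 500 received vectors.
n_users, chip_len, n_samples = 3, 8, 500

# Unknown spreading codes (mixing matrix A) and binary symbols in {-1, +1}.
A = rng.standard_normal((chip_len, n_users))
S = rng.choice([-1.0, 1.0], size=(n_users, n_samples))
X = A @ S + 0.05 * rng.standard_normal((chip_len, n_samples))  # noisy mixtures

# The noiseless mixtures cluster around 2**n_users points, one per symbol
# combination.  Spread the prototypes over the data with a greedy
# farthest-point initialization so each cluster gets a prototype.
n_proto = 2 ** n_users
idx = [0]
for _ in range(n_proto - 1):
    dist = np.min(np.linalg.norm(X.T[:, None] - X.T[idx][None], axis=2), axis=1)
    idx.append(int(np.argmax(dist)))
W = X.T[idx].copy()  # (n_proto, chip_len) prototype matrix

# Winner-takes-all competitive learning: move the closest prototype
# a small step toward each observation.
lr = 0.05
for epoch in range(20):
    for x in X.T:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += lr * (x - W[winner])

# Mean distance from each sample to its nearest prototype; a small value
# means the prototypes have settled on the symbol clusters.
d = np.linalg.norm(X.T[:, None, :] - W[None, :, :], axis=2).min(axis=1)
print(round(float(d.mean()), 3))
```

Once the cluster centres are found, assigning each centre a symbol vector (e.g. from a few known pilot symbols) recovers the transmitted bits; the sketch only demonstrates that the cluster structure is learnable without the codes.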