This study addresses the task of supervised cross-domain image generation, which aims to translate an image from a source domain to a target domain, guided by a reference image from the latter. The key difference between the authors' setting and the recently proposed domain-guided photo generation is that their image generation is bidirectional, i.e. images are generated in both domains; they therefore call the task bidirectional cross-domain image generation. This novel task poses new challenges, as it requires the model to learn to decouple the personalised and shared semantics of the two domains. To this end, the authors propose a framework that learns a feature space decomposed into three parts: two personalised semantic subspaces, which encode patterns unique to each domain, and a shared semantic subspace, which captures patterns common to both. The three subspaces are decoupled automatically through end-to-end training. Extensive experiments on the modified shoe and handbag datasets show that the framework can generate high-quality images in both domains.
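
To make the three-way decomposition concrete, here is a minimal sketch of one way such a partitioned feature space and bidirectional translation could be wired up. This is not the authors' architecture: all module names, layer sizes, and the idea of swapping the shared code across domains are illustrative assumptions layered on the description above.

```python
# Minimal sketch (illustrative, not the paper's model) of a feature space split
# into two domain-specific subspaces plus one shared subspace, with
# bidirectional generation obtained by swapping the shared code across domains.
import torch
import torch.nn as nn


class PartitionedEncoder(nn.Module):
    """Encodes an image into a (domain-specific, shared) pair of latent codes."""

    def __init__(self, in_ch=3, spec_dim=64, shared_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_specific = nn.Linear(64, spec_dim)  # personalised (private) code
        self.to_shared = nn.Linear(64, shared_dim)  # shared (cross-domain) code

    def forward(self, x):
        h = self.backbone(x)
        return self.to_specific(h), self.to_shared(h)


class Decoder(nn.Module):
    """Decodes a concatenated (specific, shared) code back into an image."""

    def __init__(self, spec_dim=64, shared_dim=64, out_ch=3):
        super().__init__()
        self.fc = nn.Linear(spec_dim + shared_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, spec, shared):
        h = self.fc(torch.cat([spec, shared], dim=1)).view(-1, 64, 8, 8)
        return self.net(h)


# Bidirectional generation: encode each domain's image, keep each domain's
# personalised code, and exchange the shared code before decoding.
enc_a, enc_b = PartitionedEncoder(), PartitionedEncoder()
dec_a, dec_b = Decoder(), Decoder()

x_a = torch.randn(4, 3, 32, 32)  # batch from domain A (e.g. shoes)
x_b = torch.randn(4, 3, 32, 32)  # batch from domain B (e.g. handbags)

spec_a, shared_a = enc_a(x_a)
spec_b, shared_b = enc_b(x_b)

fake_b = dec_b(spec_b, shared_a)  # A's shared content rendered with B's private style
fake_a = dec_a(spec_a, shared_b)  # B's shared content rendered with A's private style
```

In this sketch, end-to-end training would attach reconstruction and cross-domain losses to `fake_a` and `fake_b` so that the private and shared codes separate automatically; the specific loss design used by the authors is not reproduced here.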