In this paper, we propose a new technique for halftoning color images. Our technique parallels recent work in model-based halftoning for both monochrome and color images: we incorporate a human visual model that accounts for the difference in the viewer's responses to luminance and chrominance information. This requires transforming the RGB color space to a luminance/chrominance-based color space. The color transformation we use is a linearization of the uniform color space L*a*b* that also decouples changes between the luminance and chrominance components. After deriving a tractable expression for the total squared perceived error, we apply the method of Iterated Conditional Modes (ICM) to iteratively toggle halftone values, exploiting several degrees of freedom to reduce the perceived error predicted by the model.
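To make the ICM step concrete, the following is an illustrative sketch, not the paper's exact algorithm: it halftones a tiny grayscale image, with a simple 3x3 lowpass kernel standing in for the human visual model (the paper's model operates on luminance and chrominance channels of a color image). Each sweep tentatively toggles one halftone dot at a time and keeps the toggle only if it reduces the total squared error between the filtered halftone and the filtered original.

```python
# Hypothetical 3x3 lowpass kernel (a stand-in for the HVS model), flattened
# into (dy, dx, weight) triples; KSUM normalizes it to unit DC gain.
KERNEL = [(dy, dx, w) for dy, row in enumerate(
    [[1, 2, 1], [2, 4, 2], [1, 2, 1]]) for dx, w in enumerate(row)]
KSUM = 16.0

def perceived(img, h, w):
    """Filter the image with the lowpass kernel (zero padding at borders)."""
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy, dx, wt in KERNEL:
                yy, xx = y + dy - 1, x + dx - 1
                if 0 <= yy < h and 0 <= xx < w:
                    acc += wt * img[yy][xx]
            out[y][x] = acc / KSUM
    return out

def total_error(halftone, target_p, h, w):
    """Total squared error between the filtered halftone and filtered target."""
    hp = perceived(halftone, h, w)
    return sum((hp[y][x] - target_p[y][x]) ** 2
               for y in range(h) for x in range(w))

def icm_halftone(gray, max_sweeps=20):
    """ICM: toggle one binary dot at a time, keeping only error-reducing moves."""
    h, w = len(gray), len(gray[0])
    target_p = perceived(gray, h, w)
    # Initialize with simple thresholding; ICM then refines this estimate.
    ht = [[1.0 if v >= 0.5 else 0.0 for v in row] for row in gray]
    err = total_error(ht, target_p, h, w)
    for _ in range(max_sweeps):
        changed = False
        for y in range(h):
            for x in range(w):
                ht[y][x] = 1.0 - ht[y][x]          # trial toggle
                trial = total_error(ht, target_p, h, w)
                if trial < err:
                    err = trial                     # keep the toggle
                    changed = True
                else:
                    ht[y][x] = 1.0 - ht[y][x]      # revert
        if not changed:                             # local minimum reached
            break
    return ht, err
```

Because every accepted toggle strictly decreases the error, the iteration converges to a local minimum of the model-predicted perceived error; the full method additionally exploits degrees of freedom in the color transform and dot patterns.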