A standards lab grade 20-bit DAC with 0.1ppm/°C drift: The dedicated art of digitizing one part per million

Significant progress has recently occurred in high precision, instrumentation grade digital-to-analog conversion. Ten years ago, 12-bit digital-to-analog converters (DACs) were considered premium devices. Today, 16-bit DACs are available and increasingly common in system design. These are true precision devices with less than 1 least significant bit (LSB) linearity error and 1ppm/°C drift. Nonetheless, some DAC applications require even higher performance. Automatic test equipment, instruments, calibration apparatus, laser trimmers, medical electronics and other applications often require DAC accuracy beyond 16 bits. 18-bit DACs have been produced in circuit assembly form, although they are expensive and require frequent calibration. Twenty and even 23+ bit "DACs" are represented by manually switched Kelvin-Varley dividers.

The approach described here instead closes a digital feedback loop around a conventional DAC. The slave 20-bit DAC's output is monitored by the "master" LTC2400 analog-to-digital (A-to-D) converter, which feeds digital information to a code comparator. The code comparator differences the user input word with the LTC2400 output, presenting a corrected code to the slave DAC. In this fashion, the slave DAC's drift and nonlinearity are continuously corrected by the loop to an accuracy determined by the A-to-D converter and VREF. The sole requirement placed on the DAC is that it be monotonic.

This loop has a number of desirable attributes. As mentioned, accuracy limitations are set by the A-to-D converter and its reference; no other components in the loop need be stable. Additionally, loop behavior averages low-order bit indecision and jitter, obviating the loop's inherent small-signal instability. Finally, classical remote sensing may be used, or digitally based sensing is possible by placing the A-to-D converter at the load. The A-to-D's SO-8 package and lack of external components make this digitally incarnated Kelvin sensing scheme practical.
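The correction loop above can be illustrated with a minimal numerical sketch. The model below is an assumption for illustration only: the slave DAC and master A-to-D are stand-in functions (the hypothetical `slave_dac` has deliberate gain error and mild nonlinearity but is monotonic, as the loop requires), and the code comparator simply adds the difference between the user word and the A-to-D reading to the code presented to the DAC.

```python
# Illustrative model of the master/slave correction loop; the names,
# error terms and iteration count are assumptions, not the actual hardware.

FULL_SCALE = 1 << 20  # 20-bit DAC span

def slave_dac(code):
    """Monotonic but imperfect slave DAC: gain error plus mild bow
    nonlinearity, in normalized (0 to 1) output units."""
    ideal = code / FULL_SCALE
    return 0.999 * ideal + 2e-6 * ideal * (1 - ideal)

def master_adc(volts):
    """Accurate master A-to-D: reports the output as a 20-bit code."""
    return round(volts * FULL_SCALE)

def settle(user_code, iterations=10):
    """Code comparator: difference the user word with the A-to-D reading
    and present the corrected code to the slave DAC."""
    dac_code = user_code
    for _ in range(iterations):
        error = user_code - master_adc(slave_dac(dac_code))
        dac_code += error  # corrected code sent to the slave DAC
    return dac_code

# The loop forces the *output*, as seen by the A-to-D, to the user word,
# despite the slave DAC's own gain error and nonlinearity.
code = settle(524288)  # half scale
assert abs(master_adc(slave_dac(code)) - 524288) <= 1
```

Note that the loop's accuracy depends only on the A-to-D converter and its reference; the slave DAC's errors are absorbed into the corrected code, which is why monotonicity is its only hard requirement.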