Approximate Computing: Solving Computing's Inefficiency Problem?

IT WAS IN the early 1990s when I first came in contact with the IEEE Design & Test magazine (then it still had "Computers" in its name). At that time, we contributed to a special issue on what then appeared to be a promising emerging field. And indeed, as we realized years later, IEEE D&T was the right place to publish these new ideas. That special issue, with all its contributions, was instrumental in establishing a new research direction. Its articles were widely cited, industry adopted several of the basic techniques in the following years, and it is no exaggeration to say that this special issue of IEEE D&T is regarded as having published some of the fundamental ideas of that field.

Now, more than two decades later, I find myself responsible for the future of this magazine as its Editor-in-Chief. It is both an honor and an obligation to lead it. My major goal will be to look for highly innovative research that has the potential for high impact and for establishing the foundations of new research fields. I invite you to contribute, and to contact me about topics in design and test that you believe belong in this category for future issues.

This issue focuses on "approximate computing," a research topic currently enjoying a remarkable rise in attention. One might object that computing has never been accurate anyway, so is there anything new at all? Scott Davidson, in The Last Byte of this issue, presents a journey that reminds us that computing has indeed never been exact or infallible. Still, there is something new about the constraints we face: as classical Dennard scaling has come to an end, on-chip power densities are climbing to unsustainable levels. As a consequence, many on-chip computing systems cannot operate at full performance at all times. This effect has been coined "Dark Silicon" (stay tuned for a special issue on that topic in IEEE D&T in a few months). One of the promises of approximate computing is to make computing more efficient and thus to avoid this scenario. Increased efficiency in this context means obtaining about the same amount of computing, at approximately the same accuracy and at comparable performance, but at a lower power and energy cost.

Last year I had a conversation on the topic with my colleague Prof. Anand Raghunathan from Purdue University, who stated: "Computing systems are facing the challenge of diminishing benefits from scaling and increasing unreliability in devices. At the same time, the nature of workloads has profoundly changed: across the spectrum from embedded devices to the cloud, more and more compute cycles are spent on applications such as recognition, search, and analytics, where 'correctness' is defined as producing results that are good enough rather than a unique answer. Yet, computing platforms continue to be designed to adhere exactly to strict and rigid notions of correctness. This view of computing platforms is excessively restrictive and presents a significant opportunity for improving efficiency."

The message is that we should not continue to aim for maximum accuracy at high cost in cases where that accuracy is not needed anyway. This requires, however, a thorough analysis of what level of accuracy an application actually needs.
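To make the idea concrete, consider loop perforation, one widely studied approximation technique: a loop simply skips a fraction of its iterations, trading result quality for compute. The following is a minimal illustrative sketch, not code from this editorial; the function names and the skip parameter are hypothetical.

```python
# Minimal sketch of loop perforation (illustrative only).

def exact_mean(values):
    """Baseline: visits every element."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def perforated_mean(values, skip=4):
    """Approximate: visits only every `skip`-th element,
    cutting the work by roughly a factor of `skip`."""
    total = 0.0
    count = 0
    for i in range(0, len(values), skip):
        total += values[i]
        count += 1
    return total / count

if __name__ == "__main__":
    data = [float(i % 100) for i in range(1_000_000)]
    print(exact_mean(data))       # exact result, full cost
    print(perforated_mean(data))  # "good enough" result at ~1/4 the work
```

For workloads such as the recognition, search, and analytics applications mentioned above, the perforated result is often indistinguishable from the exact one at the application level, which is precisely why the accuracy requirements of each application must be analyzed before such a technique is applied.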