IEEE A-SSCC 2019

IEEE Asian Solid-State Circuits Conference

Tutorials
November 4 (Monday)

Tutorial 1: On-Chip Millimeter Wave Voltage Measurements for Debugging, Built-in Self-Test and Self-Healing

Prof. Kenneth O

(Univ. of Texas, Dallas)

  • Date: November 4 (Monday)
  • Time: (coming soon)
  • Room: (coming soon)
Biography:

Kenneth O received his Ph.D. degree in Electrical Engineering and Computer Science from MIT in 1989. From 1989 to 1994, he worked at Analog Devices Inc. developing sub-micron CMOS processes and high-speed bipolar and BiCMOS processes. He was a professor at the University of Florida, Gainesville from 1994 to 2009. He is currently the Director of the Texas Analog Center of Excellence and TI Distinguished University Chair Professor at the University of Texas at Dallas. His research group is developing circuits, components, and systems operating at frequencies up to 40 THz using silicon IC technologies. Dr. O is the Vice President and President-Elect of the IEEE Solid-State Circuits Society. He received the 2014 Semiconductor Research Association University Researcher Award. Prof. O is an IEEE Fellow.

Abstract:

Traditional high-frequency testing using probes and external instruments in conjunction with de-embedding is unacceptable for high-volume testing of millimeter-wave (mm-wave) CMOS integrated circuits for emerging consumer applications: the cost, test time, and level of sophistication required by the traditional techniques are too high. In addition, debugging circuit behavior by measuring internal node voltages with these techniques is impractical because of the parasitics of probe pads and the impact of probe landing. Lastly, because they use components with smaller physical dimensions, mm-wave circuits are more sensitive to process variations. The first two challenges can be circumvented with high-impedance broadband root-mean-square (RMS) detectors that, incorporated as part of the circuit, have negligible impact on its operation and allow mm-wave voltages to be characterized through low-frequency measurements. The third can be mitigated by using the detectors along with on-chip tuning elements; in fact, this can allow the use of higher-Q circuits with improved power efficiency. This tutorial discusses devices in CMOS for implementing the detectors, design considerations, and examples of how these detectors can be used in circuits operating at 30 to 300 GHz.
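The core idea, reducing a mm-wave waveform to a quantity readable at low frequency, can be illustrated with a small numerical sketch. This is a behavioral model only, not the detector circuit; the 0.4 V amplitude and sample count are arbitrary assumptions for illustration:

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled waveform."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# One period of a hypothetical 0.4 V-amplitude sinusoid at a mm-wave node.
amp, n = 0.4, 1000
samples = [amp * math.sin(2 * math.pi * k / n) for k in range(n)]

print(rms(samples))  # ~0.283 V, i.e. amp / sqrt(2)
```

An on-chip RMS detector produces a near-DC output proportional to this value, so the amplitude at an internal node can be inferred with low-frequency instrumentation instead of probing the node at its operating frequency.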

Tutorial 2: AI Computing: What it is about & How hardware can help it out

Prof. Masato Motomura

(Tokyo Inst. of Tech.)

  • Date: November 4 (Monday)
  • Time: (coming soon)
  • Room: (coming soon)
Biography:

Masato Motomura received B.S. and M.S. degrees in 1985 and 1987, respectively, and a Ph.D. in Electrical Engineering in 1996, all from Kyoto University. He joined NEC research laboratories in 1987, where he worked on various hardware architectures including multi-threaded parallel processors, memory-based processors, and reconfigurable systems. From 2001 to 2008 he led the research and productization of the DRP (dynamically reconfigurable processor) that he invented. He was also a visiting researcher at the MIT Laboratory for Computer Science from 1991 to 1992. He became a professor at Hokkaido University in 2011, and a professor at the Tokyo Institute of Technology in 2019, where he currently leads the AI Computing Research Unit. He won the IEEE JSSC Annual Best Paper Award in 1992, the IPSJ Annual Best Paper Award in 1999, the IEICE Achievement Award in 2011, and the ISSCC Silkroad Award as the last author in 2018. He is a member of IEEE, IEICE, IPSJ, and EAJ.

Abstract:

Thanks to the enormous progress and success of deep neural networks (DNNs), computer architecture research has recently been regaining its past "excitement": many architectures have been proposed for accelerating the inference or training of DNNs, especially at the edge. Most of them share common features: they are hardware-oriented, reconfigurable, domain-specific, and in/near-memory. This tutorial will try to supply 1) the background knowledge needed to understand such DNN architectures, 2) insights into why they are emerging, 3) a survey of recent findings, and 4) a view of where this architectural innovation is heading. It will also cover four research examples that the presenter’s teams have developed in the past few years: a) a binary reconfigurable in-memory DNN accelerator (VLSI 2017), b) a log-quantized, 3D-memory-stacked DNN accelerator (ISSCC 2018), c) a dynamically reconfigurable AI engine (DRP) (VLSI 2018), and d) an error-amortizing binary DNN architecture (FPT 2018).
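One of the hardware-oriented ideas above, log quantization (example b), replaces each multiplier with a bit shift by constraining weights to powers of two. The following is a minimal behavioral sketch of that principle, not the ISSCC 2018 design; the function names and the choice of integer activations are assumptions for illustration:

```python
import math

def log_quantize(w):
    """Quantize |w| to the nearest power of two; keep the sign."""
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exponent = round(math.log2(abs(w)))
    return sign, exponent

def mac_log(activations, weights):
    """Multiply-accumulate where every multiply is only a bit shift."""
    acc = 0
    for a, w in zip(activations, weights):
        sign, e = log_quantize(w)
        if sign:
            term = a << e if e >= 0 else a >> -e  # shift replaces multiply
            acc += sign * term
    return acc

print(mac_log([3, 8, 1], [2.0, 0.5, -4.0]))  # 6, exact since weights are powers of two
```

For weights that are not exact powers of two, the shift result approximates the product; the tutorial's design point is that the accuracy loss can be small while the multiplier hardware disappears.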

Tutorial 3: Bringing back Pipelined ADCs in the Era of SAR ADCs

Prof. Seung-Tak Ryu

(KAIST)

  • Date: November 4 (Monday)
  • Time: (coming soon)
  • Room: (coming soon)
Biography:

Seung-Tak Ryu (M’06–SM’13) received the M.S. and Ph.D. degrees from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1999 and 2004, respectively.

He was with Samsung Electronics in Kiheung, Korea from 2004 to 2007. From 2007 to 2009, he was with the Information and Communications University, Daejeon, Korea, as an Assistant Professor. He has been with the Department of Electrical Engineering, KAIST, Daejeon, Korea, since 2009, where he is currently an Associate Professor. His research interests include analog and mixed-signal IC design with an emphasis on data converters.

Prof. Ryu served as a Technical Program Committee (TPC) member of the ISSCC and has been serving on the TPC of the A-SSCC. He is currently an Associate Editor of the IEEE Solid-State Circuits Letters.

Abstract:

While SAR ADCs have become one of the most popular ADC architectures during the past decade owing to their compactness and power efficiency in advanced CMOS processes, their limited conversion rate remains a major design bottleneck even with time interleaving (TI), especially at high resolutions. Meanwhile, pipelined ADCs have evolved to circumvent power-hungry opamp-based residue generation. Recently, often with SAR ADCs as sub-stages, pipelined ADCs have achieved both high conversion rates and power efficiency. Because pipelining also eases the channel-mismatch calibration burden of TI ADCs by reducing the number of high-speed channels, the architecture merits continued development. From this perspective, this tutorial aims to provide a working knowledge of pipelined ADCs: the operational principle, the importance of residue accuracy and the major error sources, the history of design innovations in key circuit blocks and architectural modifications, and recent remarkable design examples.
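The principle of a SAR sub-stage inside a pipeline, resolving a few bits and passing an amplified residue to the next stage, can be sketched behaviorally. This assumes an ideal comparator, DAC, and residue amplifier; all names and parameters are illustrative:

```python
def sar_stage(vin, vref, bits):
    """Ideal SAR sub-ADC: binary-search 'bits' bits, return code and residue."""
    code, dac, step = 0, 0.0, vref / 2
    for _ in range(bits):
        code <<= 1
        if vin >= dac + step:      # comparator decision
            dac += step            # keep the tested DAC step
            code |= 1
        step /= 2
    return code, vin - dac         # residue = input minus DAC output

def pipeline_adc(vin, vref, stage_bits):
    """Cascade of SAR stages with ideal 2**bits inter-stage residue gain."""
    total, v = 0, vin
    for b in stage_bits:
        code, residue = sar_stage(v, vref, b)
        total = (total << b) | code
        v = residue * (1 << b)     # gain restores the residue to full scale
    return total

print(pipeline_adc(0.6, 1.0, [3, 3]))  # 38, i.e. truncation of 0.6 * 2**6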

Tutorial 4: Nonvolatile Logic and Computing-in-Memory for AI Edge Chips

Prof. Meng-Fan Chang

(National Tsing Hua Univ.)

  • Date: November 4 (Monday)
  • Time: (coming soon)
  • Room: (coming soon)
Biography:

Meng-Fan Chang is a Distinguished Professor at National Tsing Hua University, Taiwan. Since 2010, Dr. Chang has authored or co-authored more than 50 top conference papers (including 18 at ISSCC, 21 at the VLSI Symposia, 9 at IEDM, and 5 at DAC). He has served as an associate editor for IEEE TVLSI and IEEE TCAD, and as a guest editor of the IEEE JSSC. He has served on the TPCs of ISSCC, IEDM (Ex-com and MT chair), DAC (sub-committee chair), A-SSCC, and numerous other conferences. He has been a Distinguished Lecturer for the IEEE Solid-State Circuits Society (SSCS) and the Circuits and Systems Society (CASS), and an AdCom member of the IEEE Nanotechnology Council. He has been serving as the Program Director of the Micro-Electronics Program of the Ministry of Science and Technology (MOST) in Taiwan during 2018-2020, and served as Associate Executive Director of Taiwan’s National Program of Intelligent Electronics (NPIE) during 2011-2018. He is a recipient of the Outstanding Research Award of MOST-Taiwan and other national awards. He is a Fellow of the IEEE.

Abstract:

Memory has proven to be a major bottleneck in the development of energy-efficient chips for IoT applications and artificial intelligence (AI). Recent emerging memory devices not only serve as memory macros but also enable computing-in-memory (CIM) for IoT and AI chips. In this tutorial, we will review recent trends in memory and AI+IoT chips. Then, we will examine some of the challenges, circuit-device interactions, and recent progress of silicon-proven SRAM-based and emerging-memory-based CIMs for IoT and AI chips.
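The CIM operating principle, activating many wordlines at once so a bitline accumulates an entire dot product in the analog domain, can be modeled in a few lines. This is a purely digital behavioral model with illustrative names; real macros must additionally contend with readout ADC resolution and device variation:

```python
def cim_bitserial_mac(inputs, weights, in_bits):
    """Bit-serial CIM MAC: inputs are applied one bit-plane per cycle;
    each cycle's bitline sum is shifted and accumulated digitally."""
    acc = 0
    for b in range(in_bits - 1, -1, -1):        # MSB first
        # One "analog" cycle: every row drives its cell with one input bit,
        # and the bitline current sums weight * bit over all active rows.
        col_sum = sum(w * ((x >> b) & 1) for x, w in zip(inputs, weights))
        acc = (acc << 1) + col_sum              # digital shift-and-add
    return acc

print(cim_bitserial_mac([3, 5], [2, -1], in_bits=3))  # 1 == 3*2 + 5*(-1)
```

Because each cycle sums over all rows at once, an N-row column performs N multiplies per analog cycle; the price is that `col_sum` must be digitized by an ADC, whose resolution sets the macro's accuracy.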