A-SSCC2017 Tutorial Speakers

Monday, November 6
TUTORIAL 1, 9:00-10:30, Emerald B Hall (Convention Center 3F)

ADC hybrids and ADC morphing

Michael P. Flynn
University of Michigan, USA

Biography

Michael P. Flynn received the B.E. and M.Eng.Sc. degrees from University College Cork (UCC), Ireland, in 1988 and 1990, respectively, and the Ph.D. degree from Carnegie Mellon University in 1995. He was with National Semiconductor in Santa Clara, CA, from 1993 to 1995, and from 1995 to 1997 he was a Member of Technical Staff with Texas Instruments, Dallas, TX. From 1997 to 2001 he was with Parthus Technologies, Cork, Ireland. Dr. Flynn joined the University of Michigan in 2001, where he is currently a Professor. His technical interests are in data conversion, RF circuits, serial transceivers, and biomedical systems. He is an IEEE Fellow and a 2008 Guggenheim Fellow, and he was Editor-in-Chief of the IEEE Journal of Solid-State Circuits from 2013 to 2016.

Abstract

Hybrid ADC architectures combine existing architectures to improve the energy efficiency or performance of ADCs. Many hybrids take advantage of the energy efficiency of the SAR ADC architecture to make other architectures more efficient. For example, a SAR ADC can be used as a sub-ADC to improve the energy efficiency or extend the resolution of pipeline or sigma-delta ADCs. Another hybrid approach noise-shapes the quantization and comparator noise in a SAR ADC. Extended-counting ADCs combine sigma-delta and Nyquist ADCs, which are often SAR ADCs, to achieve faster throughput. The zoom ADC is another hybrid between a sigma-delta and a Nyquist ADC. At the same time, hybridization blurs the boundaries between ADC architectures. Some new hybrids are simply other architectures wearing new clothes. This tutorial explores and categorizes hybrid ADCs, discusses the advantages of different hybrid architectures, and explains how hybridization sometimes really morphs one ADC type into another.
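For readers unfamiliar with the SAR building block that many of these hybrids reuse, the following is a minimal behavioral sketch, not taken from the tutorial, of an ideal successive-approximation conversion. The function name, bit width, and the ideal DAC and comparator are illustrative assumptions; no noise, mismatch, or noise-shaping is modeled.

```python
# Minimal behavioral model of an N-bit SAR (successive-approximation) conversion.
# Illustrative sketch only: ideal DAC and comparator, no noise or mismatch.

def sar_convert(vin, vref=1.0, bits=8):
    """Return the digital code for vin in [0, vref) via binary search."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)            # tentatively set the next bit
        vdac = vref * trial / (1 << bits)  # ideal DAC output for that trial code
        if vin >= vdac:                    # comparator decision
            code = trial                   # keep the bit
    return code

if __name__ == "__main__":
    for v in (0.1, 0.5, 0.9):
        print(f"vin={v:.2f} -> code={sar_convert(v)}")
```

Roughly speaking, in a hybrid such as a SAR-assisted pipeline, a loop like this would resolve only the coarse bits, with the remaining residue handled by a later stage.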

TUTORIAL 2, 10:45-12:15, Emerald B Hall (Convention Center 3F)

Accelerator Design for Deep Learning Training

Jinwook Oh
IBM Research, USA

Biography

Jinwook Oh received the B.S. degree in electrical engineering from Seoul National University, South Korea, in 2008, and the M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 2010 and 2013, respectively. In 2014, he joined the IBM Thomas J. Watson Research Center, NY, USA, as a Research Staff Member on the accelerator architecture and machine learning team in the Science and Technology department of IBM Research. He has been developing new computing architectures for algorithms and applications running on IBM Watson and P/Z processors, including machine learning, analytics, and computer vision.

Abstract

Deep Neural Networks (DNNs) achieve superior accuracy for many applications, but at high computational cost: very large models require hundreds of megabytes of data storage, exaops of computation, and high bandwidth for data movement. In spite of these impressive advances, it still takes days to weeks to train state-of-the-art deep networks on large datasets. This tutorial introduces a multi-pronged approach to meeting both the throughput and the energy-efficiency goals of DNN training. It incorporates a number of key features, including the ability to support large-scale distributed DNN training tasks running on specialized (ASIC) hardware. Dataflow accelerators that support reduced-precision computation while maintaining high utilization look promising as the industry specializes beyond GPUs for deep learning.
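As a rough illustration of the reduced-precision computation mentioned above, and not the accelerator or training scheme presented in the tutorial, the sketch below fake-quantizes two tensors to an 8-bit grid and compares a dot product against its full-precision reference. The bit width, scaling rule, and random data are assumptions chosen for illustration.

```python
# Toy illustration of reduced-precision arithmetic for DNN workloads:
# quantize tensors to a low-bit fixed-point grid, then compute a dot product.

import numpy as np

def quantize(x, bits=8):
    """Symmetric uniform quantization of x to 'bits' bits (fake-quantized floats)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale  # dequantized values, carrying the quantization error

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)            # stand-in for weights
a = rng.standard_normal(1024)            # stand-in for activations

full = np.dot(w, a)                      # full-precision reference
low = np.dot(quantize(w), quantize(a))   # reduced-precision version
print(f"fp32: {full:.4f}  int8-like: {low:.4f}  error: {abs(full - low):.4f}")
```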

TUTORIAL 3, 13:30-15:00, Emerald B Hall (Convention Center 3F)

Basics of Jitter in Wireline Communications

Ali Sheikholeslami
University of Toronto, Canada

Biography

Ali Sheikholeslami has been a professor at the University of Toronto, Canada, since 1999. His research interests are jitter, analog and digital integrated circuits, high-speed signaling, and memory design. He has published over 70 journal and conference articles, including several on jitter. He has served as the ISSCC Education Chair since 2013 and as a member of its wireline committee from 2007 to 2013. Since 2016, he has been the Education Chair and the Distinguished Lecturer Program Chair for the IEEE Solid-State Circuits Society and an elected member of its Administrative Committee. Prof. Sheikholeslami has received numerous teaching awards from the Faculty of Applied Science and Engineering at the University of Toronto. He is a co-author of a book entitled Understanding Jitter and Phase Noise, to appear in print by early 2018.

Abstract

Jitter refers to deviation from ideal timing in clock and data transitions. In wireline communications, jitter reduces the timing margin available for clock and data recovery (CDR) circuits and poses significant challenges to signal integrity as data rates march toward 64 Gb/s/lane and beyond.

In this tutorial, we first review the basic definitions of jitter and its properties, the relationship between jitter and phase noise, and the effects of jitter on CDR and other building blocks of a wireline system. We then describe the concepts of jitter transfer, jitter generation, and jitter tolerance curves, and the methods of characterizing, modeling, and simulating jitter. Finally, we present some recent work on jitter measurement and jitter mitigation techniques that are used to optimize link performance.
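As a concrete, hypothetical illustration of the basic definitions reviewed above, and not material from the tutorial, the sketch below generates ideal clock edges with added random jitter and computes time-interval error (TIE) and period jitter. The clock rate, jitter magnitude, and choice of metrics are assumptions for illustration.

```python
# Back-of-the-envelope jitter metrics from clock edge times (in seconds).
# Synthetic example: ideal edges plus Gaussian random jitter.

import numpy as np

period = 1.0 / 10e9                       # nominal 10 GHz clock -> 100 ps period
n = 1000
rng = np.random.default_rng(1)
edges = np.arange(n) * period + rng.normal(0, 1e-12, n)   # ~1 ps rms jitter

tie = edges - np.arange(n) * period       # time-interval error vs. ideal edges
periods = np.diff(edges)                  # measured periods
period_jitter = periods - period          # deviation of each period from nominal

print(f"rms TIE:           {np.std(tie) * 1e12:.2f} ps")
print(f"rms period jitter: {np.std(period_jitter) * 1e12:.2f} ps")
print(f"peak-to-peak TIE:  {(tie.max() - tie.min()) * 1e12:.2f} ps")
```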

TUTORIAL 4, 15:15-16:45, Emerald B Hall (Convention Center 3F)

Emerging memory technology for IoT and AI applications

Takayuki Kawahara
Tokyo University of Science, Katsushika, Japan

Biography

Takayuki Kawahara is currently a Professor in the Department of Electrical Engineering at Tokyo University of Science, Katsushika, Japan. His laboratory focuses on sustainable electronics, including spin-current applications such as SOT-RAM. In the field of DRAM, his major contributions were low-voltage subthreshold-current reduction circuits. He also developed the world's first fully functional 2-Mb STT-RAM chip in 2007 and developed FD-SOI SRAM circuitry with back-gate control. From 1997 to 1998, he was a visiting researcher at the Swiss Federal Institute of Technology in Lausanne (EPFL). Prof. Kawahara received the 9th (2009) Yamazaki-Teiichi Prize and the 2017 MEXT Commendation for Science and Technology, and he is an IEEE Fellow.

Abstract

Progress in memory technology is enlightening: it brings new materials and principles into the LSI field more frequently than any other technology, and it opens commercial opportunities with considerable financial potential.

Artificial intelligence (AI) and the Internet of Things (IoT) have been attracting attention. In this lecture, emerging memory devices such as phase-change RAM (PCRAM), magnetoresistive random-access memory (MRAM), and resistive random-access memory (RRAM) are first summarized, along with the status of their large-scale integration. Typical spin-transfer torque (STT), spin-orbit torque (SOT), and voltage-controlled writing technologies are described in detail, especially with regard to MRAM. Next, prospective memories are illustrated with examples for AI and IoT applications, both in the cloud/server domain and in the things/edge domain. The development trends of AI and IoT are also surveyed. Design challenges in exploiting non-volatility are emphasized for each application. Finally, a new trend in which memory devices evolve from wearable to implantable is discussed.