TRENDS AND CHALLENGES IN SoC DESIGN
A SEMINAR REPORT
Submitted by
NAGA VARDHAN.I
NITHEESH KUMAR.K
SAI HEMENDRA.B.V
of
BACHELOR OF TECHNOLOGY
JANUARY 2021
BONAFIDE CERTIFICATE
Certified that this seminar report entitled “TRENDS AND CHALLENGES IN SOC DESIGN” is the bonafide work of “NAGA VARDHAN.I (Reg. No.18UEEC0158), NITHEESH KUMAR.K (Reg. No.18UEEC0193) and SAI HEMENDRA.B.V (Reg. No.18UEEC0057)”.
We express our deepest gratitude to our respected Founder President and Chancellor COL. PROF. VEL. Dr. R. RANGARAJAN, our Foundress President Dr. SAGUNTHALA RANGARAJAN, and our Chairperson Managing Trustee and Vice President. We are very thankful to our beloved Chancellor COL. PROF. VEL. Dr. R. RANGARAJAN for providing us with an environment to complete the work successfully.
We would like to express our gratitude towards our Vice Chancellor PROF. Dr. S. SALIVAHANAN for
his kind cooperation and encouragement.
We are obligated to our beloved Registrar Dr. E. KANNAN for providing immense support in all our endeavours.
We are thankful to our esteemed Director Academics Dr. A. T. RAVICHANDRAN for providing immense support in all our endeavours.
We are extremely thankful and pay our gratitude to our Dean Dr. V. JAYASANKAR for his enormous care and encouragement towards us throughout the seminar.
It is a great pleasure for us to acknowledge the care of our Head of the Department Dr. R. S. VALARMATHI for her valuable suggestions, which helped us complete the work in time, and for her encouragement and unwavering support, which were instrumental in the completion of the seminar. We also thank our department faculty and supporting staff for their help and guidance in completing this seminar.
NAGA VARDHAN.I
NITHEESH KUMAR.K
SAI HEMENDRA.B.V
CONTENTS
ABSTRACT
1. INTRODUCTION
6. VERIFICATION TRENDS
6.1. Simulation Technologies
7. CONCLUSION
8. REFERENCES
ABSTRACT
CHAPTER 1
INTRODUCTION
The technology revolution has had a profound impact on our daily life. For example,
personal communication will never be the same. About 80 years ago, Bell Laboratories
demonstrated the first mobile phone prototype, which had to be mounted on a car. About
30 years ago, the first hand-held mobile phone became commercially available. Today,
the standard cellular phone is compact, user friendly, and packed with functionality.
Moreover, many optional features are available, such as personal digital assistants and
global positioning systems. In fact, more and more new applications are ready to harness
the technology revolution. Prominent examples include third-generation wireless
communication and a wide range of popular lifestyle consumer electronics (such as digital cameras, video camcorders, digital televisions, and set-top boxes). According to In-Stat/MDR, the market for smart appliances in the digital home experienced a 70% compound annual growth rate from 2001 to 2006. Moving forward, a Gartner market report predicted that the $500 million market for SoC in 2005 would grow by over 80% by 2010, an annual growth rate about twice that of general-purpose microprocessors. Such a change is largely due to advances in device technology, which enable us to put billions of transistors on a chip for almost unlimited processing capability.
Figure 1 shows that in the past 40 years, we have been able to put about a million times more transistors onto a chip, keeping pace with Moore's Law. The first microprocessor had a couple of thousand transistors, with functionality limited to basic logic/arithmetic processing. In contrast, a modern SoC can have billions of transistors, supporting a wide range of functions (processors/controllers,
application-specific modules, data storage, and mixed-signal circuits). Thanks to ever
increasing large-scale integration, SoC is able to meet the increasing computational
demand by new applications. We face many formidable challenges, however. System
integration means more than simply squeezing components onto a chip. There is a huge
gap between what can be theoretically designed and what can be practically implemented.
New requirements on performance, power consumption, and rapid design cycles
necessitate we revisit some fundamental design principles. For instance, one of the
persistent challenges is how to deploy in concert an ever increasing number of transistors
with acceptable power consumption. Another challenge is that in order to meet the
increasingly demanding requirements in multimedia applications, SoC must provide
functional flexibility as well as processing capability. There are many other deeper issues
related to SoC research. In this paper, we address the driving forces behind SoC designs, the design flow, current trends, and future challenges, from an architect's perspective. Figure 2 serves as an outline of this paper.
1. The top level shows the basic driving forces behind SoC designs: (1) integrating more transistors within (2) a short period of time to provide (3) high performance and (4) flexibility, as mentioned earlier.
2. The second level shows the divide-and-conquer strategy adopted to create flexible SoC products with short design cycles. To elaborate on this strategy, Section 2 of this paper lists the typical approaches, including hardware-software partitioning/co-design, programmable cores, IP design and reuse, and vertical integration.
3. The third level displays emerging issues, including power consumption, memory bandwidth/latency, and transistor variability, which have a major impact on system designs. These are addressed in Section 3.
4. Recognizing these emerging issues, the fourth level shows how modern system design has incorporated novel power/thermal management, multi-processor SoC, reconfigurable logic, and design for verification and testing. These are covered in Section 4.
5. As depicted in the fifth level, we anticipate further innovations in scalable, reusable, and reliable system architectures, IP deployment and integration, on-chip interconnects, and memory hierarchies in the near future. These are addressed in Section 5.
CHAPTER 2
2.1. PROGRAMMABLE CORES
With the rapid development cycles in today's multimedia algorithm research, there are a number of approaches to meeting diverse computing requirements while achieving high computational and energy efficiency. A predominant approach is to incorporate programmable cores: 90% of the SoC products in 130 nm technology have at least one programmable core. With programmable cores, several different algorithms can be executed on the same hardware, and the functionality of a specific system can be easily upgraded by a change in software. This creates a versatile platform that can follow new generations of applications and standards. Some popular programmable core implementations use RISC and/or DSP cores [5, 16, 32–34, 47]; some even extend existing programmable processor cores with multimedia enhancements.
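To make the flexibility argument concrete, the C++ sketch below (with hypothetical codec names) shows how a fixed processing platform can dispatch to interchangeable algorithm implementations selected at run time; registering a new entry is a pure software upgrade, with no silicon change. This is an illustrative analogy rather than any specific vendor's mechanism.

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Each "algorithm" is ordinary software running on the same programmable core.
using Kernel = std::function<void(const std::vector<int>&)>;

int main() {
    // Hypothetical registry: a firmware update can add entries here
    // without any change to the underlying silicon.
    std::map<std::string, Kernel> codecs;
    codecs["codec_v1"] = [](const std::vector<int>& s) {
        std::printf("decoding %zu samples with codec_v1\n", s.size());
    };
    codecs["codec_v2"] = [](const std::vector<int>& s) {
        std::printf("decoding %zu samples with codec_v2\n", s.size());
    };

    std::vector<int> stream(1024);
    // The same hardware executes whichever algorithm the product needs today.
    codecs.at("codec_v2")(stream);
    return 0;
}
```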
2.2. HARDWARE-SOFTWARE CO-DESIGN
Many applications can be divided into two portions: (1) complex data-dependent and
decision-making procedures, and (2) computationally intensive and regular tasks.
While using a programmable controller to implement control-intensive tasks, we can
employ fast dedicated hardware modules to perform regular computation-intensive
tasks. Therefore, for almost all multimedia SoC designs, there is a common co-design
methodology, as shown in Fig. 4. The methodology includes (1) software and
hardware partitioning, (2) software and hardware synchronization, (3) algorithm
optimization, (4) software optimization, and (5) dedicated hardware design. The
methodology is applicable to video decoders, audio decoders, and other future
multimedia applications.
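A minimal sketch of the partitioning idea, assuming a hypothetical accelerator interface: the irregular, data-dependent decisions stay in software, while the regular, computation-intensive kernel (here a sum of absolute differences, stubbed in plain C++) is funneled through one narrow call that a dedicated hardware module would implement in a real SoC.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for a dedicated hardware block: a regular, compute-intensive
// kernel (sum of absolute differences, common in video coding). In a real
// SoC, this call would drive a memory-mapped accelerator instead.
uint32_t sad_hw(const uint8_t* a, const uint8_t* b, size_t n) {
    uint32_t acc = 0;
    for (size_t i = 0; i < n; ++i)
        acc += (a[i] > b[i]) ? (a[i] - b[i]) : (b[i] - a[i]);
    return acc;
}

int main() {
    std::vector<uint8_t> cur(256, 128), ref(256, 120);
    // Software side: irregular, data-dependent control flow.
    // Decide per block whether the regular kernel is worth running at all.
    bool block_is_static = false;  // hypothetical decision input
    if (!block_is_static) {
        uint32_t cost = sad_hw(cur.data(), ref.data(), cur.size());
        std::printf("SAD cost from 'hardware' kernel: %u\n", cost);
    }
    return 0;
}
```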
2.3. IP REUSE
In order to deploy successful SoC products in a timely manner, we must build new SoCs from circuit blocks that have been designed for previous ones. By using existing, high-performance IP, SoC designers not only save time and resources, but can also create the compelling solutions that users want. Additionally, IP does not refer only to hardwired logic or hardware design. Software development is starting to be on the critical path of time-to-market for SoCs, so it would be valuable to quickly assemble new software stacks (e.g., OS, compiler, libraries) from reusable software components. As a matter of fact, while most semiconductor companies have mature hardware reuse methodologies, the majority of them have not yet reused software components, even though embedded software design can easily take more resources than hardware design. Furthermore, to sell silicon in today's business environment, semiconductor companies must minimize risk and shorten time-to-market for their customers. There is another development: IP libraries must include behavioral model descriptions (e.g., SystemC) so that the entire hardware/software co-design can be simulated and verified at an early design stage.
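To illustrate the behavioral-model point, here is a hedged SystemC (a C++ library) sketch of a trivial adder IP described purely functionally, which is the abstraction level at which early hardware/software co-simulation takes place; the module and signal names are invented for the example.

```cpp
#include <systemc.h>

// Behavioral model of a (hypothetical) adder IP: no timing, no gates,
// just the function, which is enough for early HW/SW co-simulation.
SC_MODULE(AdderIP) {
    sc_in<sc_uint<16>>  a, b;
    sc_out<sc_uint<17>> sum;

    void compute() { sum.write(a.read() + b.read()); }

    SC_CTOR(AdderIP) {
        SC_METHOD(compute);
        sensitive << a << b;
    }
};

int sc_main(int, char*[]) {
    sc_signal<sc_uint<16>> a, b;
    sc_signal<sc_uint<17>> sum;
    AdderIP ip("ip");
    ip.a(a); ip.b(b); ip.sum(sum);

    a.write(40000); b.write(30000);
    sc_start(1, SC_NS);               // settle the combinational method
    std::cout << "sum = " << sum.read() << std::endl;
    return 0;
}
```

Later in the flow, the same testbench could be re-run against an RTL implementation of the IP, which is precisely what makes early behavioral models reusable.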
CHAPTER 3
There are many emerging factors steering the modern design trend. Among them, the ones that have significantly influenced SoC designs are listed below.
3.1. POWER CONSUMPTION
As more transistors are integrated into a single chip, the chip consumes
more power. However, it is getting harder to deliver more power to a
single chip. Thus, it is important not only to build a system with the
highest performance, but also to deliver the performance with the lowest
power consumption. Low-power design is critical both to battery-powered
devices (because we want our handheld or portable devices to operate
longer) and to line-powered equipment (because power dissipation
strongly influences the packaging/cooling cost and the reliability of the
chip). In addition to high performance, power consumption is a key design
concern. There are two causes of increasing power consumption. First,
because the power dissipation per transistor is not falling at the rate that
gate density is increasing, the power density of future SoCs is set to
increase. Thus, we must reduce overall system power consumption by
using system architecture design rather than relying on process technology
alone. Second, architects used to increase frequency (burn more power) in
order to achieve better performance. For example, as shown in Fig. 5, from the 1980s to the late 1990s the power consumption of Intel's microprocessors closely followed the trend of Moore's Law, doubling every 2 or 3 years. Nonetheless, as power consumption approached the limits of sustainability, architects were forced to take a different direction. As a result, the latest Intel Core 2 Duo processors have a lower thermal design power than the Intel Pentium 4 processors. Similarly, SoC architects have to pay more attention to such design choices.
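The trade-off architects faced can be made concrete with the standard first-order model of dynamic (switching) power; the voltage and frequency figures below are illustrative assumptions, not measurements.

```latex
% First-order dynamic power model: activity factor \alpha, switched
% capacitance C, supply voltage V_{dd}, clock frequency f.
P_{\text{dyn}} = \alpha \, C \, V_{dd}^{2} \, f

% Because V_{dd} must roughly track f, power grows nearly cubically with
% frequency. Illustrative comparison: two cores at half frequency and
% 0.8x supply voltage versus one core at full frequency,
\frac{P_{\text{2 cores}}}{P_{\text{1 core}}}
  = \frac{2 \cdot \alpha C \,(0.8\,V_{dd})^{2}\,(f/2)}{\alpha C \,V_{dd}^{2}\, f}
  = 0.64,
% i.e., comparable aggregate throughput for roughly two-thirds of the
% power, which is why multi-core SoCs displaced pure frequency scaling.
```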
3.2. MEMORY BANDWIDTH AND LATENCY
Although computational speeds can improve at a rate of 50% per year, the time to access off-chip memory is not improving at the same rate: DRAM latencies and bandwidths improve at only about 7% and 20% per year, respectively, as shown in Fig. 6. This increasing gap between processor and memory speeds is a well-known problem, named the memory wall. In order to feed the computational engine, SoC architects have to take action, such as integrating embedded memory into the same chip (e.g., Section 5.4) or exploiting data access locality at the algorithm or software level.
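A quick back-of-the-envelope calculation, compounding the growth rates quoted above, shows why this gap is called a wall.

```latex
% Processor throughput grows ~50%/year; DRAM latency improves ~7%/year.
% Relative processor-memory gap after n years:
\text{gap}(n) = \left(\frac{1.50}{1.07}\right)^{n}
% e.g., over one decade:
\text{gap}(10) = \frac{1.50^{10}}{1.07^{10}}
  \approx \frac{57.7}{1.97} \approx 29,
% so a memory access that once cost a few instruction slots costs tens
% of slots ten years later, unless the architecture hides the latency.
```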
CHAPTER 4
In response to the emerging issues behind the design of SoCs discussed in Section
3, several new design strategies (discussed below) have arisen as the most
prominent and promising solutions: these characterize the new paradigm of
modern SoC systems.
Traditionally, 70% of the time and energy in chip design cycles is spent on verification. Typically, when there is a small change in a component, we need to re-verify timing for the entire chip design. One way to avoid that is to create clear boundaries and routing channels between the components. In this case, changes in one block do not affect the timing of others. For example, mesochronous clocking is a way to decouple different logic blocks from one another. As we
integrate a billion transistors onto a single chip, it takes much more time to test the chip
as a whole (verifying all the state machines and logic blocks in a design). Furthermore, as we are likely to see increasing variability in transistor behavior, both statically and dynamically, built-in self-tests in each logic block become essential. Additionally, if the
IPs integrated onto a chip come from more than one source, IP providers and SoC
integrators must work closely together to define effective test strategies. Each IP block
must have a wrapper so that it can be isolated from other parts of the system while it is
being tested.
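To make the built-in self-test idea concrete, the following C++ sketch models the classic LFSR-based BIST scheme: a pseudo-random pattern generator stimulates the block under test, and its responses are compacted into a signature that is compared against a golden value computed once from a known-good model. The polynomial taps and the toy "block under test" are illustrative choices.

```cpp
#include <cstdint>
#include <cstdio>

// 16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11 (a maximal-length
// polynomial): generates the pseudo-random test patterns on chip.
uint16_t lfsr_step(uint16_t s) {
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (s >> 1) | (bit << 15);
}

// Stand-in for the logic block under test.
uint16_t block_under_test(uint16_t x) { return (x * 3u) ^ 0x5A5Au; }

int main() {
    uint16_t pattern = 0xACE1u;   // any non-zero LFSR seed
    uint32_t signature = 0;
    for (int i = 0; i < 1000; ++i) {
        pattern = lfsr_step(pattern);
        // Response compaction: real BIST hardware would use a MISR
        // register; a mixing accumulator stands in for it here.
        signature = signature * 31u + block_under_test(pattern);
    }
    // In silicon, this value is compared against the stored golden
    // signature; any mismatch flags the block as faulty.
    std::printf("signature = 0x%08X\n", signature);
    return 0;
}
```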
CHAPTER 5
In the future, new generations of SoC architecture must address even more open and challenging design issues, as exemplified below:
5.2. IP INTEGRATION
Integration requires more than simply placing components spatially together on a single chip. A few of the issues, for example, are outlined below:
- How to integrate analog IP safely; in particular, how to deal with noise coupling from the analog domain to the digital domain, or vice versa.
- How to deal with black-box IP.
- How to handle an IP's I/O requests in a timely manner without over-provisioning resources for it.
- How to migrate IPs from one process technology to the next as quickly as possible. Is synthesizable soft IP better than customized hard IP, even though customized hard IP may be more efficient?
- How to effectively test and verify the whole system when the IPs come from different sources.
These are challenges beyond simply placing components together. We need good strategies for the integration of hardware and software IP components.
5.4. MEMORY HIERARCHY
As external memory bandwidth becomes a major bottleneck, more on-die
high-speed memory, such as cache or local buffer, will be deployed. Sometimes,
embedded memory (SRAM, DRAM, flash, ROM) will be integrated onto the chip. The
amount of memory integrated into an ASIC-type SoC design has increased from 20% in
1999 to 70% in 2005. Challenges arise when trying to balance efficiency and power.
SRAM provides high performance, while flash memory is the best solution in terms of
power consumption. The amount and the placement of each kind of memory in the SoC
will greatly affect access efficiency and power. Additionally, cache may introduce
indeterminate delay, cache coherence, and memory consistency challenges. Therefore,
the unpredictable latencies associated with caches must be carefully accounted for. An alternative solution is to use a software-controlled local buffer instead of a cache. A famous commercial example of this is the Cell architecture developed by Sony, Toshiba, and IBM. A software-controlled local buffer reduces the uncertainty in latency, but its use makes the software more complex. Furthermore, because digital, mixed-signal, RF, and memory blocks are tightly integrated, power and substrate noise may cause sensitive blocks to suffer functional failures; owing to the unique characteristics of memory circuits and layout, embedded memories are particularly vulnerable. Making embedded memories noise-tolerant will be vital to a successful SoC design. 3D-stacked memory is another exciting technology for increasing bandwidth substantially. However, despite its promising advantages, the thermal behavior of multiple dies stacked together is a serious concern. In short, a high-performance and low-power SoC architecture must balance the trade-offs among the various memory options.
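The software-controlled local buffer trade-off can be sketched in plain C++ as below, with memcpy standing in for the asynchronous DMA engine a Cell-style core would provide: the program, not cache hardware, decides what resides in local memory, which makes latency predictable at the cost of explicit double-buffer management. All sizes and names are invented for the illustration.

```cpp
#include <cstdio>
#include <cstring>
#include <utility>
#include <vector>

constexpr size_t TILE = 256;  // capacity of one half of the local buffer

// Process one tile held entirely in the fast local buffer.
double process(const double* tile, size_t n) {
    double acc = 0;
    for (size_t i = 0; i < n; ++i) acc += tile[i] * tile[i];
    return acc;
}

int main() {
    std::vector<double> dram(4 * TILE, 1.5);   // stands in for external DRAM
    double bufA[TILE], bufB[TILE];             // software-managed local store
    double* front = bufA;
    double* back  = bufB;

    // "DMA" tile 0 into the local buffer before the loop starts.
    std::memcpy(front, dram.data(), TILE * sizeof(double));
    double total = 0;
    for (size_t t = 0; t < dram.size() / TILE; ++t) {
        // In real hardware the next transfer overlaps with compute;
        // this synchronous memcpy merely stands in for that async DMA.
        if (t + 1 < dram.size() / TILE)
            std::memcpy(back, dram.data() + (t + 1) * TILE,
                        TILE * sizeof(double));
        total += process(front, TILE);         // compute on the front half
        std::swap(front, back);                // double-buffer swap
    }
    std::printf("total = %f\n", total);
    return 0;
}
```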
5.5. RELIABILITY
CHAPTER 6
VERIFICATION TRENDS
Over the last few years, Foster led several industry studies to identify broad trends in verification. One can make the following critical observations from these studies:
1) Verification represents the bulk of the effort in system design, incurring on average about 57% of the total project time. There has been a discernible increase in the number of projects where verification consumed over 80% of the project time.
2) Most designs show an increase in the use of emulation and FPGA models, with usage growing with design complexity. This is consistent with the need for a fast prototyping environment, particularly for complex SoCs, and also perhaps emphasizes the role of software in modern systems (which requires emulation/FPGA prototyping for validation).
3) Most successful designs are productized after an average of two silicon spins. Note that for a hardware/software system this translates to one spin for catching all the hardware problems and another for all the software interaction issues. This underlines the critical role of pre-silicon verification in ensuring that there are no critical gating issues during post-silicon validation.
4) There has been a significant increase in the use of both simulation-based verification and targeted formal verification activities.
The last point above deserves qualification. In particular, recall that both simulation and formal verification techniques fall short of the scalability requirements of modern computing devices. How, then, are they seeing increasing adoption? The answer lies in the transformative changes that have been occurring in these technologies in recent years. Rather than aiming for full functional coverage, they are being targeted at critical design features. Current research trends include the verification of specific emerging application domains, such as automotive and security, and the use of data analytics to improve verification efficiency. In the remainder of this section, we dive a little deeper into how these technologies have been changing to adapt to the increasing demand as well as to address the scalability gap.
6.1. SIMULATION TECHNOLOGIES
Depending on the proportion of analog and mixed-signal circuits on the chip, the testbench architecture for AMS simulation can be divided into two categories: "analog on top" (top-level models are analog, with digital modules inside) or "digital on top" (top-level models are digital, with analog modules inside). The latter is more commonly used, and proper modeling of analog behavior is critical to "digital on top" mixed-signal chip verification. Analog models at different abstraction levels are used throughout the
project life cycle, with consideration of the trade-off between simulation speed and
accuracy. For example, Verilog-AMS provides four abstraction levels to model analog
behaviors. To support AMS verification, the simulator must have the performance and
capacity to simulate a mixture of models at different abstraction levels for today’s
increasingly large designs in a reasonable amount of time, while maintaining an
acceptable level of accuracy. It is not uncommon for complex SoC designs to contain nested digital and analog blocks, which the simulator should also support. In
addition, the co-existence of models at various abstraction levels creates complexity in
verification planning as the models can be mixed and matched for achieving different
verification goals.
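As a hedged sketch of the "digital on top" style, the SystemC fragment below embeds a real-number model of an analog block (a first-order low-pass filter advanced once per digital clock) inside an otherwise digital testbench; the filter coefficient and all names are invented for illustration, and a production flow would typically use Verilog-AMS or SystemC-AMS models instead.

```cpp
#include <systemc.h>

// Real-number model of an analog low-pass filter: the analog voltage is
// carried as a plain double and advanced one step per digital clock edge,
// trading accuracy for the simulation speed a "digital on top" flow needs.
SC_MODULE(AnalogLPF) {
    sc_in<bool>    clk;
    sc_in<double>  vin;
    sc_out<double> vout;
    double state = 0.0;
    double alpha = 0.1;   // illustrative discrete-time RC coefficient

    void step() {
        state += alpha * (vin.read() - state);  // v += (vin - v) * dt/RC
        vout.write(state);
    }
    SC_CTOR(AnalogLPF) {
        SC_METHOD(step);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<double> vin, vout;
    AnalogLPF lpf("lpf");
    lpf.clk(clk); lpf.vin(vin); lpf.vout(vout);

    vin.write(1.0);           // digital testbench drives a step input
    sc_start(500, SC_NS);     // about 50 clock edges; output settles near 1.0
    std::cout << "settled output = " << vout.read() << std::endl;
    return 0;
}
```

Swapping this model for a transistor-level netlist of the same block, without changing the digital testbench, is exactly the speed/accuracy trade made across the project life cycle.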
CHAPTER 7
CONCLUSION
Nowadays, with greater device integration, SoC designs can implement high-performance and inexpensive systems for many killer applications. However, system designs have also become more complex. This paper surveyed the emerging issues, modern design trends, and future
system design challenges for SoC research and design. It goes without saying that more
research efforts are still required to create innovative solutions. The SoC architecture must consider overall system performance, flexibility, and scalability; power/thermal management; system partitioning (among digital, analog, on-chip, and off-chip); architecture partitioning (between hardware and software); algorithm development for emerging applications; and so on. Moreover, the coverage in this article is far from
comprehensive. Some important topics, such as the integration of electro-optical
devices, and the integration of nanoscale physical, chemical, or biological sensors, are
regretfully omitted. Fortunately, there is much discussion of many of these subjects in the literature; the remaining portions of this special issue are a good place to start.
CHAPTER 8
REFERENCES