

Hyper-threading (officially Hyper-Threading Technology) is Intel's trademarked simultaneous
multi-threading (SMT) implementation, used to improve the parallelization of
computation on x86 microprocessors.


HISTORY: The concept behind this technology was patented by Sun
Microsystems. Intel first introduced it in 2002 in certain Pentium 4 and
Xeon processors. It was then dropped from subsequent processor lines that
could not support it, and was later reintroduced in the Nehalem
microarchitecture, which is the basis for the current Core i3, i5, and i7 processors.

OS REQUIREMENTS: An operating system utilizing hyper-threading must
be capable of simultaneous multiprocessing (i.e., of dividing the
workload among multiple processors); ideally it should also have explicit SMT
support and be optimized for it. Beyond that, the technology is transparent to the
operating system and its programs.

FUNCTIONALITY: Hyper-threading allows a single microprocessor to appear as two
separate processors to the operating system: for each physical core,
the operating system addresses two logical (virtual) processors and shares the
workload between them. Each logical processor can be individually interrupted,
directed to execute a thread, or halted. The technology uses the processor's
resources more efficiently by enabling multiple threads to run at a time: both
logical processors share the same execution resources, so when one of them
stalls, the other can borrow those resources. Its main function is to
increase the number of independent instructions in the pipeline. With
hyper-threading, the physical processor can execute more than one concurrent
stream of instructions sent by the operating system, allowing more work to be
done in each clock cycle. Hyper-threading pays off in situations that involve
multitasking with heavy software (such as Maya, Unreal Engine, 3D editing, or
gaming). In such cases, tasks are scheduled so that there is no idle time on
the processor; the scheduler can also pack lightweight programs onto one
logical processor and heavyweight ones onto others.
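As a small illustration of the logical-processor view described above, Python's standard library reports the number of logical processors the operating system sees; on a machine with hyper-threading enabled this is typically twice the number of physical cores. This is only a sketch, and the actual counts depend on the machine it runs on:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def logical_cpus():
    # os.cpu_count() reports *logical* processors; with hyper-threading
    # enabled this is typically twice the number of physical cores.
    return os.cpu_count() or 1

def run_on_threads(n_tasks):
    # The OS scheduler spreads these threads across the logical
    # processors exposed by SMT.
    with ThreadPoolExecutor(max_workers=logical_cpus()) as pool:
        return list(pool.map(lambda i: i * i, range(n_tasks)))
```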

BENEFITS: The features of this technology include an increase in
processor throughput and an improvement in the performance of threaded software.
One can run demanding applications while maintaining system responsiveness,
and carry out graphics-intensive work without compromise. According to Intel,
the first hyper-threaded processor used 5% more die area than a non-threading
one, yet the performance boost was about 15-30%; in general, however, the gain
is highly application-dependent.

DRAW-BACKS: Hyper-threading is inefficient in situations where tasks
are performed sequentially, i.e., in a serial manner. In the beginning,
operating systems were not optimized for hyper-threading. The technology was
also criticized for increasing heat output and power consumption. In 2017, it
was revealed that some of Intel's processors had a bug in their hyper-threading
implementation that could cause data loss.


    Windows can
schedule two different threads on one physical core at the same time: if the
physical core is working on the first task but still has some resources free,
it can use its remaining resources to process the second task. You therefore
won't always see a clean doubling or halving of performance when running one
task versus two, but you will see substantial performance differences when
leveraging the hyper-threading feature of a physical core. There is no real
distinction to draw between the logical cores on a hyper-threading CPU; they
are all the same, both physically and in how a hyper-threading-aware OS such as
Windows 10 sees them.






In computer
systems, data is often loaded into registers very infrequently, yet the clock
signal toggles every cycle and typically drives a large capacitive load,
leading to dynamic power consumption. In semiconductor microelectronics, clock
gating is a power-saving technique for switching circuits ON and OFF: it stops
the clock to a logical unit while that unit is not operating. Clock gating adds
extra logic to prune the clock tree, disabling portions of circuitry that are
not in use at a given time so that the flip-flops in them do not have to change
state; this matters because dynamic power consumption is mostly due to
flip-flops switching state. A further advantage of clock gating is reduced die
area, since the gating logic removes a large number of multiplexers from the
circuitry. Clock gating is efficiently used by many devices to turn off buses,
controllers, bridges, and parts of the processor in order to reduce power
consumption.

Clock gating
can be done in two ways: by software, switching power states per instruction,
or through smart hardware that determines whether a specific piece of
circuitry is still required and switches it OFF if not; a combination of both
is also possible. It works by grouping circuits into logical blocks, such that
blocks doing no work are shut off. (Power consumption in asynchronous circuits
is usually data-dependent, as they by definition have no clock.) Clock gating
can be implemented with an AND gate driven by a negative level-sensitive latch,
or with an OR gate driven by a positive level-sensitive latch; the benefits are
a zero-cycle hold check and no additional latency. This idea gave rise to the
Integrated Clock Gating Cell (ICGC), which comes in two types: AND-type ICGC
and OR-type ICGC.

AND type ICGC:

It has an AND gate preceded by a negative level-sensitive latch. The
enable and test-enable inputs are active-high; the clock output, however, is
low when inactive.

OR type ICGC:

It has an OR gate preceded by a positive level-sensitive latch. The
enable and test-enable inputs are active-low; the clock output, however, is
high when inactive.

During the
shift phase of scan testing, all clock-control functions have to be bypassed to
let the shift happen. The ICGC provides the test-enable input for this purpose:
whenever the design has to go into shift mode, the test-enable signal goes
high, thereby bypassing all the functional enable signals.
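The behaviour of the AND-type ICGC can be sketched in a few lines of Python. This is a cycle-level toy model, not a real hardware description: the latch samples `enable OR test_enable` while the clock is low, so the AND gate's second input is stable during the clock's high phase and the gated clock cannot glitch.

```python
def negative_latch(clk, d, prev_q):
    # Negative level-sensitive latch: transparent while clk is low,
    # holds its previous value while clk is high.
    return d if clk == 0 else prev_q

def and_icgc(clk_wave, enable_wave, test_enable_wave):
    # AND-type ICGC: the latch samples (enable OR test_enable) during
    # the low phase of the clock; the gated clock is clk AND latch output.
    # Because the latch only changes while clk is low, the AND gate's
    # second input is stable whenever clk is high, so no glitches occur.
    q, gated = 0, []
    for clk, en, te in zip(clk_wave, enable_wave, test_enable_wave):
        q = negative_latch(clk, en | te, q)
        gated.append(clk & q)
    return gated
```

With test-enable held high the gated clock simply follows the input clock, which is exactly the scan-shift bypass described above.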



Ivy Bridge is the code name of the third generation of Intel Core processors;
these CPUs' part numbers begin with 3. Ivy Bridge uses several newer
technologies: in order to achieve the reduction in die size, Intel developed a
new kind of three-dimensional "Tri-Gate" transistor. Mass production of Ivy
Bridge chips began in the third quarter of 2011. Quad-core mobile models were
launched on 29 April 2012, dual-core mobile models on 31 May 2012, and Core i3
models were available in the market by September 2012. The socket used for
Intel microprocessors based on the Ivy Bridge microarchitecture is called
Socket H2 (LGA 1155). The Ivy Bridge microarchitecture somewhat resembles its
predecessor, Sandy Bridge, but has more processing power and a slightly
smaller physical size. Most mainstream Ivy Bridge chips are quad-core (except
for the economy versions), with clock speeds ranging from 2.5 to 3.5 GHz
(gigahertz) and a cache size of 6 to 8 MB (megabytes). Ivy Bridge processors
employ a 22 nm process, a drop in physical size of approximately one-third
relative to the previous chips. They are also backward compatible with Sandy
Bridge motherboards. The advantages of the Ivy Bridge microarchitecture are:
(1) support for PCI Express 3.0 and DDR3L memory; (2) enhanced security
features; (3) replacement of DirectX 10.1 with DirectX 11 capabilities; (4) a
smaller microprocessor yielding space for an integrated graphics chip,
improving display performance, with up to three displays supported; (5)
multiple 4K video playback; (6) a 14-19 stage instruction pipeline; (7) a
built-in GPU with 6-16 execution units.


Intel also increased the throughput of the floating-point and integer dividers
in Ivy Bridge as opposed to Sandy Bridge: floating-point operations have twice
the throughput they had in Sandy Bridge, which means that floating-point
calculations should be able to go through more quickly.




The next
thing, closely related to process manufacturing, is the thermal design power,
or TDP, of the processor. TDP is a measure of how much cooling you have to
provide for the chip: Intel guarantees manufacturers that if a certain amount
of heat is dissipated, the CPU will operate as intended. For example, if your
processor has a 35-watt TDP, the manufacturer has to ensure the system can
dissipate 35 watts of heat. If you go over that thermal limit and cannot
dissipate 35 watts, the processor will not be able to run at full speed. So
the lower the TDP, the less design effort is required to cool the chip and
keep it operating as fast as possible.
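The rule above reduces to a simple comparison. The function below is only an illustrative sketch; the names and the pass/fail threshold are assumptions for the example, not an Intel specification:

```python
def can_sustain_full_speed(cooler_dissipation_w, tdp_w):
    # If the cooling solution can remove at least TDP watts of heat,
    # the CPU can run at its rated speed; otherwise it must throttle.
    return cooler_dissipation_w >= tdp_w
```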



Ivy Bridge
has another feature called power-aware interrupt routing. What that basically
means is that if, say, two out of four cores are active, the processor decides
which of the *active* cores an incoming interrupt should be sent to: it routes
interrupts only to active cores instead of routing them to inactive ones. This
way, inactive cores do not have to be woken up, and work is kept on the cores
that are currently active, so a lot of energy is saved.
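The idea can be sketched as a simple routing policy in Python. This is purely illustrative: the function name and the round-robin choice are assumptions for the example, not Intel's actual hardware algorithm.

```python
def route_interrupt(cores_active, counter):
    # Power-aware routing sketch: prefer a core that is already awake.
    active = [i for i, awake in enumerate(cores_active) if awake]
    if not active:
        return 0  # all cores asleep: wake core 0 as a fallback
    # Round-robin among the active cores to spread the load.
    return active[counter % len(active)]
```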




Ivy Bridge's operating temperatures are reported to be about ten degrees
Celsius higher than Sandy Bridge's.



Sandy Bridge is the code name of the second generation of Intel
Core processors; these CPUs' part numbers begin with 2. The first products
under the Core brand were released in January 2011. The microarchitecture was
developed primarily by the Israeli branch of Intel and was originally
codenamed Gesher ("bridge" in Hebrew). Sandy Bridge uses a 32 nm manufacturing
process. Intel launched it after the Nehalem series of processors.

The Sandy Bridge architecture consists of multiple cores, which increase the
speed of processing and execution of programs using the technology known as
hyper-threading.

Hyper-threading, as described earlier, is a technology adopted by Intel in
which a single microprocessor acts like two separate processors to the
operating system and applications. With hyper-threading, a core of the
microprocessor can execute two program threads simultaneously.

The cores are driven by a common clock, and the workload is divided
among the group of cores in Intel's computer architecture.

Turning to the internal construction of Sandy Bridge: it consists of about
2 billion transistors per processor, with up to eight cores on a single die.

Sandy Bridge is common in Intel models such as Core i3, Core i5, and Core i7,
with clock speeds reaching well past 2 GHz. These processors can both decode
and encode video in hardware, and they integrate the graphics cores onto the
same chip as the CPU.

Intel manufactured Sandy Bridge on its 32 nm die-fabrication technology. Sandy
Bridge processors were built for laptops, workstations, desktops, and server
computers used in large institutions.

The most advanced mainstream Sandy Bridge parts have four cores, making them
quad-core processors.

Each core has two levels of cache memory. This is a big advantage: while
executing, the core can fetch and store data within itself, reducing latency
compared with other designs. There is also a third, shared level of cache,
which is used for communication between the cores and for serving their
instructions and data.
