
This output shows that there are four channels (rows) set up between four CPUs (columns).
2  Determine the CPUs to which these interrupts are assigned:
# cat /proc/irq/123/smp_affinity
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
# cat /proc/irq/131/smp_affinity
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000002
# cat /proc/irq/139/smp_affinity
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000004
# cat /proc/irq/147/smp_affinity
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000008
This shows that RXQ[0] is affinitized to CPU[0], RXQ[1] is affinitized to CPU[1], and so on. With this configuration, the latency and CPU utilization for a particular TCP flow will be dependent on that flow's RSS hash, and on which CPU that hash resolves onto.
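Where the driver supports it, the mapping from RSS hash values to receive queues (and therefore, via the interrupt affinities above, to CPUs) can be examined with ethtool; the interface name eth0 is assumed here:
# ethtool -x eth0
This prints the RX flow hash indirection table for the interface.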
NOTE: Interrupt line numbers and their initial CPU affinity are not guaranteed to be the same across reboots and driver reloads. Typically, it is therefore necessary to write a script to query these values and apply the affinity accordingly.
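As an illustration, a script along the following lines could be used. This is only a sketch: it assumes the interface is named eth0, that its interrupt lines appear in /proc/interrupts as eth0-0 to eth0-3 (as in the output above), and that each channel should be bound to the CPU with the same index.

#!/bin/sh
# Sketch: bind each eth0 channel interrupt to the CPU with the same index.
IFACE=eth0
for n in 0 1 2 3; do
    # Look up the IRQ number for channel <n> of this interface.
    irq=$(grep "${IFACE}-${n}\$" /proc/interrupts | awk -F: '{print $1}' | tr -d ' ')
    [ -n "$irq" ] || continue
    # Build a mask with only CPU <n> set and apply it.
    printf '%x\n' $((1 << n)) > /proc/irq/"$irq"/smp_affinity
done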
3  Set all network interface interrupts to a single CPU (in this case CPU[0]):
# echo 1 > /proc/irq/123/smp_affinity
# echo 1 > /proc/irq/131/smp_affinity
# echo 1 > /proc/irq/139/smp_affinity
# echo 1 > /proc/irq/147/smp_affinity
NOTE: The read-back of /proc/irq/N/smp_affinity will return the old value until a new interrupt arrives.
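For example, after step 3 has been applied, a read such as:
# cat /proc/irq/123/smp_affinity
may still report the previous mask; once that interrupt next fires (for instance after further traffic on the interface), the newly written value is returned.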
4  Set the application to run on the same CPU (in this case CPU[0]) as the network interface's interrupts:
# taskset 1 netperf
# taskset 1 netperf -H <host>
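An application that is already running can be moved to CPU[0] in the same way by passing its process ID to taskset; the PID shown here is illustrative:
# taskset -p 1 <pid>
Note that the argument 1 is a CPU mask (selecting CPU[0]), not a CPU number; taskset also accepts a list of CPU numbers when invoked with the -c option.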
NOTE: The use of taskset is typically only suitable for affinity tuning single-threaded, single traffic flow applications. For a multi-threaded application, whose threads for example each process a subset of the receive traffic, taskset is not suitable.
In such applications, it is desirable to use RSS and interrupt affinity to spread receive traffic over more than one CPU, and then have each receive thread bind to one of the respective CPUs. Thread affinities can be set inside the application with the sched_setaffinity() function (see the Linux man pages). Use of this call, and how a particular application can be tuned, is beyond the scope of this guide.
If the settings have been correctly applied, all interrupts from eth0 are being handled on CPU[0]. This can be checked:
# cat /proc/interrupts | grep eth0-
123:      13302          0          0          0    PCI-MSI-X    eth0-0
131:          0         24          0          0    PCI-MSI-X    eth0-1
139:          0          0         32          0    PCI-MSI-X    eth0-2
147:          0          0          0         21    PCI-MSI-X    eth0-3