Tensor Multiplication With Numpy Tensordot
Solution 1:
There are a couple of ways you could do this. The first thing that comes to mind is np.einsum:
import numpy as np

# some fake data
gen = np.random.RandomState(0)
ni, nj, nk = 10, 20, 100
U = gen.randn(ni, nj, nk)
V = gen.randn(nj, nk)

# reference result, computed with an explicit loop over k
res1 = np.zeros((ni, nk))
for k in range(nk):
    res1[:, k] = U[:, :, k].dot(V[:, k])

# the same contraction expressed with einsum
res2 = np.einsum('ijk,jk->ik', U, V)

print(np.allclose(res1, res2))
# True
np.einsum uses Einstein notation to express tensor contractions. In the expression 'ijk,jk->ik' above, i, j and k are subscripts that correspond to the different dimensions of U and V. Each comma-separated grouping corresponds to one of the operands passed to np.einsum (in this case U has dimensions ijk and V has dimensions jk). The '->ik' part specifies the dimensions of the output array. Any dimensions with subscripts that aren't present in the output string are summed over.
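To see that summation rule in isolation, here is a minimal sketch with small arrays (A and B are just illustrative names, not from the question):

A = np.arange(6).reshape(2, 3)
# 'ij->i' drops subscript j from the output, so j is summed over (row sums)
print(np.einsum('ij->i', A))
# [ 3 12]
B = np.arange(12).reshape(3, 4)
# 'ij,jk->ik' sums over the shared subscript j: an ordinary matrix product
print(np.allclose(np.einsum('ij,jk->ik', A, B), A.dot(B)))
# True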
np.einsum is incredibly useful for performing complex tensor contractions, but it can take a while to fully wrap your head around how it works. You should take a look at the examples in the documentation for np.einsum.
Some other options:
Element-wise multiplication with broadcasting, followed by summation:
res3 = (U * V[None, ...]).sum(1)  # broadcast over i, then sum over the j axis
inner1d with a load of transposing (note that numpy.core.umath_tests is a private module and is deprecated in newer NumPy releases):

from numpy.core.umath_tests import inner1d

res4 = inner1d(U.transpose(0, 2, 1), V.T)
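Since the question's title mentions np.tensordot: this particular contraction isn't a direct fit for tensordot, because tensordot sums over every paired axis, whereas here k is shared between U and V but not summed. As a sketch, you can contract over j only and then pull out the matching-k entries, but this is wasteful (it builds an (ni, nk, nk) intermediate and keeps only its diagonal):

# contract U's axis 1 (j) against V's axis 0 (j); result has shape (ni, nk, nk)
tmp = np.tensordot(U, V, axes=([1], [0]))
# keep only the entries where the two k axes agree
res5 = np.diagonal(tmp, axis1=1, axis2=2)
print(np.allclose(res1, res5))
# True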
Some benchmarks:
In [1]: ni, nj, nk = 100, 200, 1000
In [2]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
....: np.einsum('ijk,jk->ik', U, V)
....:
10 loops, best of 3: 23.4 ms per loop
In [3]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
....: (U * V[None, ...]).sum(1)
....:
10 loops, best of 3: 59.7 ms per loop
In [4]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
....: inner1d(U.transpose(0, 2, 1), V.T)
....:
10 loops, best of 3: 45.9 ms per loop
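One more option worth knowing about (a sketch, not included in the timings above): on NumPy 1.10+ the same contraction can be written with the @ operator by moving k to a leading batch axis so that matmul broadcasts over it. Using the arrays from the first snippet:

# U.transpose(2, 0, 1) has shape (nk, ni, nj); V.T[:, :, None] has shape (nk, nj, 1)
res6 = (U.transpose(2, 0, 1) @ V.T[:, :, None])[:, :, 0].T
print(np.allclose(res1, res6))
# True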