The following describes the semantics of operations defined in the XlaBuilder interface. Typically, these operations map one-to-one to the operations defined in the RPC interface in xla_data.proto.
A note on nomenclature: the generalized data type XLA deals with is an N-dimensional array holding elements of some uniform type (such as 32-bit float). Throughout the documentation, array is used to denote an arbitrary-dimensional array. For convenience, special cases have more specific and familiar names; for example a vector is a 1-dimensional array and a matrix is a 2-dimensional array.
AfterAll
See also XlaBuilder::AfterAll.
AfterAll takes a variadic number of tokens and produces a single token. Tokens are primitive types which can be threaded between side-effecting operations to enforce ordering. AfterAll can be used as a join of tokens for ordering an operation after a set of operations.
AfterAll(operands)
Arguments | Type | Semantics |
---|---|---|
operands | XlaOp | variadic number of tokens |
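A minimal sketch using the C++ XlaBuilder client API, in the style of the AllToAll example later in this document (the builder name and the use of CreateToken here are illustrative, not taken from the original text):
XlaBuilder b("afterall");
XlaOp t0 = CreateToken(&b);
XlaOp t1 = CreateToken(&b);
// An op consuming `joined` is ordered after both token chains. In real
// programs the tokens usually come from side-effecting ops such as
// InfeedWithToken or OutfeedWithToken.
XlaOp joined = AfterAll(&b, {t0, t1});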
AllGather
See also XlaBuilder::AllGather.
Performs concatenation across replicas.
AllGather(operand, all_gather_dim, shard_count, replica_groups, channel_id)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | Array to concatenate across replicas. |
all_gather_dim | int64 | Concatenation dimension. |
shard_count | int64 | Size of each replica group. |
replica_groups | vector of vectors of int64 | Groups between which the concatenation is performed. |
channel_id | optional int64 | Optional channel ID for cross-module communication. |
- replica_groups is a list of replica groups between which the concatenation is performed (the replica id for the current replica can be retrieved using ReplicaId). The order of replicas in each group determines the order in which their inputs are located in the result. replica_groups must either be empty (in which case all replicas belong to a single group, ordered from 0 to N - 1), or contain the same number of elements as the number of replicas. For example, replica_groups = {0, 2}, {1, 3} performs concatenation between replicas 0 and 2, and between replicas 1 and 3.
- shard_count is the size of each replica group. It is required in case replica_groups is empty.
- channel_id is used for cross-module communication: only all-gather operations with the same channel_id can communicate with each other.
The output shape is the input shape with all_gather_dim made shard_count times larger. For example, if there are two replicas and the operand has the values [1.0, 2.5] and [3.0, 5.25] respectively on the two replicas, then the output value from this op where all_gather_dim is 0 will be [1.0, 2.5, 3.0, 5.25] on both replicas.
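As a rough sketch using the C++ XlaBuilder client API (the builder name and shapes are invented for the example), the two-replica case above could be expressed as:
XlaBuilder b("allgather");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {2}), "x");
// With two replicas and empty replica_groups, the result is f32[4]:
// both replicas' operands concatenated along dimension 0.
AllGather(x, /*all_gather_dimension=*/0, /*shard_count=*/2);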
AllReduce
See also XlaBuilder::AllReduce.
Performs a custom computation across replicas.
AllReduce(operand, computation, replica_groups, channel_id)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | Array or a non-empty tuple of arrays to reduce across replicas. |
computation | XlaComputation | Reduction computation |
replica_groups | vector of vectors of int64 | Groups between which the reductions are performed |
channel_id | optional int64 | Optional channel ID for cross-module communication |
- When operand is a tuple of arrays, the all-reduce is performed on each element of the tuple.
- replica_groups is a list of replica groups between which the reduction is performed (the replica id for the current replica can be retrieved using ReplicaId). replica_groups must either be empty (in which case all replicas belong to a single group), or contain the same number of elements as the number of replicas. For example, replica_groups = {0, 2}, {1, 3} performs reduction between replicas 0 and 2, and between replicas 1 and 3.
- channel_id is used for cross-module communication: only all-reduce operations with the same channel_id can communicate with each other.
The output shape is the same as the input shape. For example, if there are two replicas and the operand has the values [1.0, 2.5] and [3.0, 5.25] respectively on the two replicas, then the output value from this op with a summation computation will be [4.0, 7.75] on both replicas. If the input is a tuple, the output is a tuple as well.
Computing the result of AllReduce requires having one input from each replica, so if one replica executes an AllReduce node more times than another, then the former replica will wait forever. Since the replicas are all running the same program, there are not a lot of ways for that to happen, but it is possible when a while loop's condition depends on data from infeed and the data that is infed causes the while loop to iterate more times on one replica than another.
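A minimal sketch of the summation example above, using the C++ XlaBuilder client API (builder names and shapes are illustrative; Build().value() assumes the sub-computation builds successfully):
// Scalar addition computation used as the reduction.
XlaBuilder add_builder("sum");
auto a0 = Parameter(&add_builder, 0, ShapeUtil::MakeShape(F32, {}), "a0");
auto a1 = Parameter(&add_builder, 1, ShapeUtil::MakeShape(F32, {}), "a1");
Add(a0, a1);
XlaComputation sum = add_builder.Build().value();
XlaBuilder b("allreduce");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {2}), "x");
// With two replicas holding [1.0, 2.5] and [3.0, 5.25], both replicas
// receive [4.0, 7.75].
AllReduce(x, sum);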
AllToAll
See also XlaBuilder::AllToAll.
AllToAll is a collective operation that sends data from all cores to all cores. It has two phases:
- The scatter phase. On each core, the operand is split into split_count blocks along split_dimension, and the blocks are scattered to all cores, e.g., the i-th block is sent to the i-th core.
- The gather phase. Each core concatenates the received blocks along concat_dimension.
The participating cores can be configured by:
- replica_groups: each ReplicaGroup contains a list of replica ids participating in the computation (the replica id for the current replica can be retrieved using ReplicaId). AllToAll will be applied within subgroups in the specified order. For example, replica_groups = { {1,2,3}, {4,5,0} } means that an AllToAll will be applied within replicas {1, 2, 3}, and in the gather phase the received blocks will be concatenated in the same order 1, 2, 3. Then another AllToAll will be applied within replicas 4, 5, 0, and the concatenation order is also 4, 5, 0. If replica_groups is empty, all replicas belong to one group, in the concatenation order of their appearance.
Prerequisites:
- The dimension size of the operand on split_dimension is divisible by split_count.
- The operand's shape is not a tuple.
AllToAll(operand, split_dimension, concat_dimension, split_count, replica_groups)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional input array |
split_dimension | int64 | A value in the interval [0, n) that names the dimension along which the operand is split |
concat_dimension | int64 | A value in the interval [0, n) that names the dimension along which the split blocks are concatenated |
split_count | int64 | The number of cores that participate in this operation. If replica_groups is empty, this should be the number of replicas; otherwise, this should be equal to the number of replicas in each group. |
replica_groups | ReplicaGroup vector | Each group contains a list of replica ids. |
Below shows an example of AllToAll.
XlaBuilder b("alltoall");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {4, 16}), "x");
AllToAll(x, /*split_dimension=*/1, /*concat_dimension=*/0, /*split_count=*/4);

In this example, there are 4 cores participating in the AllToAll. On each core, the operand is split into 4 parts along dimension 1, so each part has shape f32[4,4]. The 4 parts are scattered to all cores. Then each core concatenates the received parts along dimension 0, in the order of cores 0 through 3. So the output on each core has shape f32[16,4].
BatchNormGrad
See also XlaBuilder::BatchNormGrad and the original batch normalization paper for a detailed description of the algorithm.
Calculates gradients of batch norm.
BatchNormGrad(operand, scale, mean, variance, grad_output, epsilon, feature_index)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional array to be normalized (x) |
scale | XlaOp | 1 dimensional array (\(\gamma\)) |
mean | XlaOp | 1 dimensional array (\(\mu\)) |
variance | XlaOp | 1 dimensional array (\(\sigma^2\)) |
grad_output | XlaOp | Gradients passed to BatchNormTraining (\( \nabla y\)) |
epsilon | float | Epsilon value (\(\epsilon\)) |
feature_index | int64 | Index to feature dimension in operand |
For each feature in the feature dimension (feature_index is the index for the feature dimension in operand), the operation calculates the gradients with respect to operand, offset and scale across all the other dimensions. The feature_index must be a valid index for the feature dimension in operand.
The three gradients are defined by the following formulas (assuming a 4-dimensional array as operand, with feature dimension index l, batch size m and spatial sizes w and h):
\[ \begin{split} c_l&= \frac{1}{mwh}\sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h \left( \nabla y_{ijkl} \frac{x_{ijkl} - \mu_l}{\sigma^2_l+\epsilon} \right) \\\\ d_l&= \frac{1}{mwh}\sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h \nabla y_{ijkl} \\\\ \nabla x_{ijkl} &= \frac{\gamma_{l} }{\sqrt{\sigma^2_{l}+\epsilon} } \left( \nabla y_{ijkl} - d_l - c_l (x_{ijkl} - \mu_{l}) \right) \\\\ \nabla \gamma_l &= \sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h \left( \nabla y_{ijkl} \frac{x_{ijkl} - \mu_l}{\sqrt{\sigma^2_{l}+\epsilon} } \right) \\\\\ \nabla \beta_l &= \sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h \nabla y_{ijkl} \end{split} \]
The inputs mean and variance represent moment values across the batch and spatial dimensions.
The output type is a tuple of three handles:
Outputs | Type | Semantics |
---|---|---|
grad_operand | XlaOp | gradient with respect to input operand (\( \nabla x\)) |
grad_scale | XlaOp | gradient with respect to input scale (\( \nabla \gamma\)) |
grad_offset | XlaOp | gradient with respect to input offset (\( \nabla \beta\)) |
BatchNormInference
See also XlaBuilder::BatchNormInference and the original batch normalization paper for a detailed description of the algorithm.
Normalizes an array across batch and spatial dimensions.
BatchNormInference(operand, scale, offset, mean, variance, epsilon, feature_index)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional array to be normalized |
scale | XlaOp | 1 dimensional array |
offset | XlaOp | 1 dimensional array |
mean | XlaOp | 1 dimensional array |
variance | XlaOp | 1 dimensional array |
epsilon | float | Epsilon value |
feature_index | int64 | Index to feature dimension in operand |
For each feature in the feature dimension (feature_index is the index for the feature dimension in operand), the operation calculates the mean and variance across all the other dimensions and uses the mean and variance to normalize each element in operand. The feature_index must be a valid index for the feature dimension in operand.
BatchNormInference is equivalent to calling BatchNormTraining without computing mean and variance for each batch. It uses the input mean and variance as estimated values instead. The purpose of this op is to reduce latency in inference, hence the name BatchNormInference.
The output is an n dimensional, normalized array with the same shape as the input operand.
BatchNormTraining
See also XlaBuilder::BatchNormTraining and the original batch normalization paper for a detailed description of the algorithm.
Normalizes an array across batch and spatial dimensions.
BatchNormTraining(operand, scale, offset, epsilon, feature_index)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional array to be normalized (x) |
scale | XlaOp | 1 dimensional array (\(\gamma\)) |
offset | XlaOp | 1 dimensional array (\(\beta\)) |
epsilon | float | Epsilon value (\(\epsilon\)) |
feature_index | int64 | Index to feature dimension in operand |
For each feature in the feature dimension (feature_index is the index for the feature dimension in operand), the operation calculates the mean and variance across all the other dimensions and uses the mean and variance to normalize each element in operand. The feature_index must be a valid index for the feature dimension in operand.
The algorithm goes as follows for each batch in operand \(x\) that contains m elements with w and h as the size of spatial dimensions (assuming operand is a 4 dimensional array):
- Calculates batch mean \(\mu_l\) for each feature l in the feature dimension: \(\mu_l=\frac{1}{mwh}\sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h x_{ijkl}\)
- Calculates batch variance \(\sigma^2_l\): \(\sigma^2_l=\frac{1}{mwh}\sum_{i=1}^m\sum_{j=1}^w\sum_{k=1}^h (x_{ijkl} - \mu_l)^2\)
- Normalizes, scales and shifts: \(y_{ijkl}=\frac{\gamma_l(x_{ijkl}-\mu_l)}{\sqrt{\sigma^2_l+\epsilon} }+\beta_l\)
The epsilon value, usually a small number, is added to avoid divide-by-zero errors.
The output type is a tuple of three XlaOps:
Outputs | Type | Semantics |
---|---|---|
output | XlaOp | n dimensional array with the same shape as input operand (y) |
batch_mean | XlaOp | 1 dimensional array (\(\mu\)) |
batch_var | XlaOp | 1 dimensional array (\(\sigma^2\)) |
batch_mean and batch_var are moments calculated across the batch and spatial dimensions using the formulas above.
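A minimal sketch of a call using the C++ XlaBuilder client API (the NHWC-style shapes are invented for the example):
XlaBuilder b("batch_norm_training");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {8, 4, 4, 16}), "x");
auto scale = Parameter(&b, 1, ShapeUtil::MakeShape(F32, {16}), "scale");
auto offset = Parameter(&b, 2, ShapeUtil::MakeShape(F32, {16}), "offset");
// Returns a tuple (output, batch_mean, batch_var); features live in dim 3.
BatchNormTraining(x, scale, offset, /*epsilon=*/0.001f, /*feature_index=*/3);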
BitcastConvertType
See also XlaBuilder::BitcastConvertType.
Similar to tf.bitcast in TensorFlow, performs an element-wise bitcast operation from a data shape to a target shape. The input and output sizes must match: e.g. s32 elements become f32 elements via a bitcast routine, and one s32 element will become four s8 elements. Bitcast is implemented as a low-level cast, so machines with different floating-point representations will give different results.
BitcastConvertType(operand, new_element_type)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T with dims D |
new_element_type | PrimitiveType | type U |
The dimensions of the operand and the target shape must match, apart from the last dimension, which will change by the ratio of the primitive size before and after the conversion.
The source and destination element types must not be tuples.
Bitcast-converting to primitive types of different widths
The BitcastConvert HLO instruction supports the case where the size of the output element type T' is not the same as the size of the input element type T. As the whole operation is conceptually a bitcast and does not change the underlying bytes, the shape of the output element has to change. For B = sizeof(T), B' = sizeof(T'), there are two possible cases.
First, when B > B', the output shape gets a new minor-most dimension of size B/B'. For example:
%output = f16[10,2]{1,0} bitcast-convert(f32[10]{0} %input)
The rule remains the same for effective scalars:
%output = f16[2]{0} bitcast-convert(f32[] %input)
Alternatively, for B' > B the instruction requires the last logical dimension of the input shape to be equal to B'/B, and this dimension is dropped during the conversion:
%output = f32[10]{0} bitcast-convert(f16[10,2]{1,0} %input)
Note that conversions between different bitwidths are not element-wise.
Broadcast
See also XlaBuilder::Broadcast.
Adds dimensions to an array by duplicating the data in the array.
Broadcast(operand, broadcast_sizes)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The array to duplicate |
broadcast_sizes | ArraySlice<int64> | The sizes of the new dimensions |
The new dimensions are inserted on the left, i.e. if broadcast_sizes has values {a0, ..., aN} and the operand shape has dimensions {b0, ..., bM} then the shape of the output has dimensions {a0, ..., aN, b0, ..., bM}.
The new dimensions index into copies of the operand, i.e.
output[i0, ..., iN, j0, ..., jM] = operand[j0, ..., jM]
For example, if operand is a scalar f32 with value 2.0f, and broadcast_sizes is {2, 3}, then the result will be an array with shape f32[2, 3] and all the values in the result will be 2.0f.
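The same example as a sketch in the C++ XlaBuilder client API (the builder name is illustrative):
XlaBuilder b("broadcast");
auto s = ConstantR0<float>(&b, 2.0f);
// Produces f32[2,3] with every element equal to 2.0.
Broadcast(s, /*broadcast_sizes=*/{2, 3});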
BroadcastInDim
See also XlaBuilder::BroadcastInDim.
Expands the size and rank of an array by duplicating the data in the array.
BroadcastInDim(operand, out_dim_size, broadcast_dimensions)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The array to duplicate |
out_dim_size | ArraySlice<int64> | The sizes of the dimensions of the target shape |
broadcast_dimensions | ArraySlice<int64> | Which dimension in the target shape each dimension of the operand shape corresponds to |
Similar to Broadcast, but allows adding dimensions anywhere and expanding existing dimensions of size 1.
The operand is broadcast to the shape described by out_dim_size. broadcast_dimensions maps the dimensions of operand to the dimensions of the target shape, i.e. the i'th dimension of the operand is mapped to the broadcast_dimensions[i]'th dimension of the output shape. The dimensions of operand must have size 1 or be the same size as the dimension in the output shape they are mapped to. The remaining dimensions are filled with dimensions of size 1. Degenerate-dimension broadcasting then broadcasts along these degenerate dimensions to reach the output shape. The semantics are described in detail on the broadcasting page.
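A minimal sketch using the C++ XlaBuilder client API (values and shapes are invented for the example):
XlaBuilder b("broadcast_in_dim");
auto row = ConstantR2<float>(&b, {{1.0f, 2.0f, 3.0f}});  // f32[1,3]
// Operand dim 0 (size 1) maps to output dim 0 and is broadcast to size 2;
// operand dim 1 maps to output dim 1. Result: {{1,2,3},{1,2,3}}.
BroadcastInDim(row, /*out_dim_size=*/{2, 3}, /*broadcast_dimensions=*/{0, 1});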
Call
See also XlaBuilder::Call.
Invokes a computation with the given arguments.
Call(computation, args...)
Arguments | Type | Semantics |
---|---|---|
computation | XlaComputation | computation of type T_0, T_1, ..., T_{N-1} -> S with N parameters of arbitrary type |
args | sequence of N XlaOps | N arguments of arbitrary type |
The arity and types of the args must match the parameters of the computation. It is allowed to have no args.
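A minimal sketch using the C++ XlaBuilder client API (the squaring computation is invented for the example; Build().value() assumes the build succeeds):
XlaBuilder sub("square");
auto p = Parameter(&sub, 0, ShapeUtil::MakeShape(F32, {}), "p");
Mul(p, p);
XlaComputation square = sub.Build().value();
XlaBuilder b("call");
auto x = ConstantR0<float>(&b, 3.0f);
Call(&b, square, {x});  // evaluates to 9.0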
Cholesky
See also XlaBuilder::Cholesky.
Computes the Cholesky decomposition of a batch of symmetric (Hermitian) positive definite matrices.
Cholesky(a, lower)
Arguments | Type | Semantics |
---|---|---|
a | XlaOp | a rank > 2 array of a complex or floating-point type. |
lower | bool | whether to use the upper or lower triangle of a. |
If lower is true, computes lower-triangular matrices l such that \( a = l . l^T \). If lower is false, computes upper-triangular matrices u such that \( a = u^T . u \).
Input data is read only from the lower/upper triangle of a, depending on the value of lower. Values from the other triangle are ignored. Output data is returned in the same triangle; the values in the other triangle are implementation-defined and may be anything.
If the rank of a is greater than 2, a is treated as a batch of matrices, where all except the minor 2 dimensions are batch dimensions.
If a is not symmetric (Hermitian) positive definite, the result is implementation-defined.
Clamp
See also XlaBuilder::Clamp.
Clamps an operand to within the range between a minimum and maximum value.
Clamp(min, operand, max)
Arguments | Type | Semantics |
---|---|---|
min | XlaOp | array of type T |
operand | XlaOp | array of type T |
max | XlaOp | array of type T |
Given an operand and minimum and maximum values, returns the operand if it is in the range between the minimum and maximum, else returns the minimum value if the operand is below this range or the maximum value if the operand is above this range. That is, clamp(a, x, b) = min(max(a, x), b).
All three arrays must be the same shape. Alternatively, as a restricted form of broadcasting, min and/or max can be a scalar of type T.
Example with scalar min and max:
let operand: s32[3] = {-1, 5, 9};
let min: s32 = 0;
let max: s32 = 6;
==>
Clamp(min, operand, max) = s32[3]{0, 5, 6};
Collapse
See also XlaBuilder::Collapse and the tf.reshape operation.
Collapses dimensions of an array into one dimension.
Collapse(operand, dimensions)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T |
dimensions | int64 vector | in-order, consecutive subset of T's dimensions. |
Collapse replaces the given subset of the operand's dimensions by a single dimension. The input arguments are an arbitrary array of type T and a compile-time-constant vector of dimension indices. The dimension indices must be an in-order (low to high dimension numbers), consecutive subset of T's dimensions. Thus, {0, 1, 2}, {0, 1}, or {1, 2} are all valid dimension sets, but {1, 0} or {0, 2} are not. They are replaced by a single new dimension, in the same position in the dimension sequence as those they replace, with the new dimension size equal to the product of the original dimension sizes. The lowest dimension number in dimensions is the slowest varying dimension (most major) in the loop nest which collapses these dimensions, and the highest dimension number is the fastest varying (most minor). See the tf.reshape operator if more general collapse ordering is needed.
For example, let v be an array of 24 elements:
let v = f32[4x2x3] { { {10, 11, 12}, {15, 16, 17} },
{ {20, 21, 22}, {25, 26, 27} },
{ {30, 31, 32}, {35, 36, 37} },
{ {40, 41, 42}, {45, 46, 47} } };
// Collapse to a single dimension, leaving one dimension.
let v012 = Collapse(v, {0,1,2});
then v012 == f32[24] {10, 11, 12, 15, 16, 17,
20, 21, 22, 25, 26, 27,
30, 31, 32, 35, 36, 37,
40, 41, 42, 45, 46, 47};
// Collapse the two lower dimensions, leaving two dimensions.
let v01 = Collapse(v, {0,1});
then v01 == f32[4x6] { {10, 11, 12, 15, 16, 17},
{20, 21, 22, 25, 26, 27},
{30, 31, 32, 35, 36, 37},
{40, 41, 42, 45, 46, 47} };
// Collapse the two higher dimensions, leaving two dimensions.
let v12 = Collapse(v, {1,2});
then v12 == f32[8x3] { {10, 11, 12},
{15, 16, 17},
{20, 21, 22},
{25, 26, 27},
{30, 31, 32},
{35, 36, 37},
{40, 41, 42},
{45, 46, 47} };
CollectivePermute
See also XlaBuilder::CollectivePermute.
CollectivePermute is a collective operation that sends and receives data across replicas.
CollectivePermute(operand, source_target_pairs)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional input array |
source_target_pairs | <int64, int64> vector | A list of (source_replica_id, target_replica_id) pairs. For each pair, the operand is sent from the source replica to the target replica. |
Note that there are the following restrictions on source_target_pairs:
- Any two pairs should not have the same target replica id, and they should not have the same source replica id.
- If a replica id is not a target in any pair, then the output on that replica is a tensor consisting of 0(s) with the same shape as the input.
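A minimal sketch using the C++ XlaBuilder client API (the three-replica rotation is invented for the example):
XlaBuilder b("collective_permute");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {2}), "x");
// Rotate data among three replicas: 0 -> 1, 1 -> 2, 2 -> 0.
CollectivePermute(x, {{0, 1}, {1, 2}, {2, 0}});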
Concatenate
See also XlaBuilder::ConcatInDim.
Concatenate composes an array from multiple array operands. The array is of the same rank as each of the input array operands (which must be of the same rank as each other) and contains the arguments in the order that they were specified.
Concatenate(operands..., dimension)
Arguments | Type | Semantics |
---|---|---|
operands | sequence of N XlaOp | N arrays of type T with dimensions [L0, L1, ...]. Requires N >= 1. |
dimension | int64 | A value in the interval [0, N) that names the dimension to be concatenated between the operands. |
With the exception of dimension all dimensions must be the same. This is because XLA does not support "ragged" arrays. Also note that rank-0 values cannot be concatenated (as it is impossible to name the dimension along which the concatenation occurs).
1-dimensional example:
Concat({ {2, 3}, {4, 5}, {6, 7} }, 0)
>>> {2, 3, 4, 5, 6, 7}
2-dimensional example:
let a = {
{1, 2},
{3, 4},
{5, 6},
};
let b = {
{7, 8},
};
Concat({a, b}, 0)
>>> {
{1, 2},
{3, 4},
{5, 6},
{7, 8},
}
Conditional
See also XlaBuilder::Conditional.
Conditional(pred, true_operand, true_computation, false_operand, false_computation)
Arguments | Type | Semantics |
---|---|---|
pred | XlaOp | Scalar of type PRED |
true_operand | XlaOp | Argument of type \(T_0\) |
true_computation | XlaComputation | XlaComputation of type \(T_0 \to S\) |
false_operand | XlaOp | Argument of type \(T_1\) |
false_computation | XlaComputation | XlaComputation of type \(T_1 \to S\) |
Executes true_computation if pred is true, false_computation if pred is false, and returns the result.
The true_computation must take in a single argument of type \(T_0\) and will be invoked with true_operand, which must be of the same type. The false_computation must take in a single argument of type \(T_1\) and will be invoked with false_operand, which must be of the same type. The type of the returned value of true_computation and false_computation must be the same.
Note that only one of true_computation and false_computation will be executed depending on the value of pred.
Conditional(branch_index, branch_computations, branch_operands)
Arguments | Type | Semantics |
---|---|---|
branch_index | XlaOp | Scalar of type S32 |
branch_computations | sequence of N XlaComputation | XlaComputations of type \( T_0 \to S , T_1 \to S , ..., T_{N-1} \to S \) |
branch_operands | sequence of N XlaOp | Arguments of type \( T_0 , T_1 , ..., T_{N-1} \) |
Executes branch_computations[branch_index], and returns the result. If branch_index is an S32 which is < 0 or >= N, then branch_computations[N-1] is executed as the default branch.
Each branch_computations[b] must take in a single argument of type T_b and will be invoked with branch_operands[b], which must be of the same type. The type of the returned value of each branch_computations[b] must be the same.
Note that only one of the branch_computations will be executed depending on the value of branch_index.
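A minimal sketch of the two-branch form, using the C++ XlaBuilder client API (the branch computations are invented for the example; Build().value() assumes the builds succeed):
// Branch computations of type f32[] -> f32[].
XlaBuilder tb("double");
auto tx = Parameter(&tb, 0, ShapeUtil::MakeShape(F32, {}), "x");
Add(tx, tx);
XlaComputation double_it = tb.Build().value();
XlaBuilder fb("negate");
Neg(Parameter(&fb, 0, ShapeUtil::MakeShape(F32, {}), "x"));
XlaComputation negate = fb.Build().value();
XlaBuilder b("conditional");
auto pred = ConstantR0<bool>(&b, true);
auto operand = ConstantR0<float>(&b, 21.0f);
// Only the taken branch is executed; here the result is 42.0.
Conditional(pred, operand, double_it, operand, negate);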
Conv (convolution)
See also XlaBuilder::Conv.
As ConvWithGeneralPadding, but the padding is specified in a short-hand way as either SAME or VALID. SAME padding pads the input (lhs) with zeroes so that the output has the same shape as the input when not taking striding into account. VALID padding simply means no padding.
ConvWithGeneralPadding (convolution)
See also XlaBuilder::ConvWithGeneralPadding.
Computes a convolution of the kind used in neural networks. Here, a convolution can be thought of as an n-dimensional window moving across an n-dimensional base area, where a computation is performed for each possible position of the window.
Arguments | Type | Semantics |
---|---|---|
lhs | XlaOp | rank n+2 array of inputs |
rhs | XlaOp | rank n+2 array of kernel weights |
window_strides | ArraySlice<int64> | n-d array of kernel strides |
padding | ArraySlice< pair<int64, int64>> | n-d array of (low, high) padding |
lhs_dilation | ArraySlice<int64> | n-d lhs dilation factor array |
rhs_dilation | ArraySlice<int64> | n-d rhs dilation factor array |
feature_group_count | int64 | the number of feature groups |
batch_group_count | int64 | the number of batch groups |
Let n be the number of spatial dimensions. The lhs argument is a rank n+2 array describing the base area. This is called the input, even though of course the rhs is also an input. In a neural network, these are the input activations. The n+2 dimensions are, in this order:
- batch: Each coordinate in this dimension represents an independent input for which convolution is carried out.
- z/depth/features: Each (y,x) position in the base area has a vector associated with it, which goes into this dimension.
- spatial_dims: Describes the n spatial dimensions that define the base area that the window moves across.
The rhs argument is a rank n+2 array describing the convolutional filter/kernel/window. The dimensions are, in this order:
- output-z: The z dimension of the output.
- input-z: The size of this dimension times feature_group_count should equal the size of the z dimension in lhs.
- spatial_dims: Describes the n spatial dimensions that define the n-d window that moves across the base area.
The window_strides argument specifies the stride of the convolutional window in the spatial dimensions. For example, if the stride in the first spatial dimension is 3, then the window can only be placed at coordinates where the first spatial index is divisible by 3.
The padding argument specifies the amount of zero padding to be applied to the base area. The amount of padding can be negative -- the absolute value of negative padding indicates the number of elements to remove from the specified dimension before doing the convolution. padding[0] specifies the padding for dimension y and padding[1] specifies the padding for dimension x. Each pair has the low padding as the first element and the high padding as the second element. The low padding is applied in the direction of lower indices while the high padding is applied in the direction of higher indices. For example, if padding[1] is (2,3) then there will be a padding by 2 zeroes on the left and by 3 zeroes on the right in the second spatial dimension. Using padding is equivalent to inserting those same zero values into the input (lhs) before doing the convolution.
The lhs_dilation and rhs_dilation arguments specify the dilation factor to be applied to the lhs and rhs, respectively, in each spatial dimension. If the dilation factor in a spatial dimension is d, then d-1 holes are implicitly placed between each of the entries in that dimension, increasing the size of the array. The holes are filled with a no-op value, which for convolution means zeroes.
Dilation of the rhs is also called atrous convolution. For more details, see tf.nn.atrous_conv2d. Dilation of the lhs is also called transposed convolution. For more details, see tf.nn.conv2d_transpose.
The feature_group_count argument (default value 1) can be used for grouped convolutions. feature_group_count needs to be a divisor of both the input and the output feature dimension. If feature_group_count is greater than 1, it means that conceptually the input and output feature dimension and the rhs output feature dimension are split evenly into feature_group_count many groups, each group consisting of a consecutive subsequence of features. The input feature dimension of rhs needs to be equal to the lhs input feature dimension divided by feature_group_count (so it already has the size of a group of input features). The i-th groups are used together to compute feature_group_count many separate convolutions. The results of these convolutions are concatenated together in the output feature dimension.
For depthwise convolution the feature_group_count argument would be set to the input feature dimension, and the filter would be reshaped from [filter_height, filter_width, in_channels, channel_multiplier] to [filter_height, filter_width, 1, in_channels * channel_multiplier]. For more details, see tf.nn.depthwise_conv2d.
The batch_group_count argument (default value 1) can be used for grouped filters during backpropagation. batch_group_count needs to be a divisor of the size of the lhs (input) batch dimension. If batch_group_count is greater than 1, it means that the output batch dimension should be of size input batch / batch_group_count. batch_group_count must be a divisor of the output feature size.
The output shape has these dimensions, in this order:
- batch: The size of this dimension times batch_group_count should equal the size of the batch dimension in lhs.
- z: The same size as output-z on the kernel (rhs).
- spatial_dims: One value for each valid placement of the convolutional window.
Effectively, batch_group_count splits each lhs batch into batch_group_count groups, and the same is done for the output features. Then, for each of these groups, pairwise convolutions are performed and the outputs are concatenated along the output feature dimension. The operational semantics of all the other dimensions (feature and spatial) stay the same.
The valid placements of the convolutional window are determined by the strides and the size of the base area after padding.
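For reference, the number of valid placements along one spatial dimension can be written in closed form (a derived formula, not stated in this form in the original text). With base size \(b\), kernel size \(k\), low/high padding \(p_l, p_h\), stride \(s\), and dilation factors \(d_{lhs}, d_{rhs}\):
\[ \text{output size} = \left\lfloor \frac{\left((b - 1)\, d_{lhs} + 1\right) + p_l + p_h - \left((k - 1)\, d_{rhs} + 1\right)}{s} \right\rfloor + 1 \]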
To describe what a convolution does, consider a 2d convolution, and pick some fixed batch, z, y, x coordinates in the output. Then (y,x) is the position of a corner of the window within the base area (e.g. the upper left corner, depending on how you interpret the spatial dimensions). We now have a 2d window, taken from the base area, where each 2d point is associated with a 1d vector, so we get a 3d box. From the convolutional kernel, since we fixed the output coordinate z, we also have a 3d box. The two boxes have the same dimensions, so we can take the sum of the element-wise products between the two boxes (similar to a dot product). That is the output value.
Note that if output-z is e.g. 5, then each position of the window produces 5 values in the output into the z dimension of the output. These values differ in what part of the convolutional kernel is used -- there is a separate 3d box of values used for each output-z coordinate. So you could think of it as 5 separate convolutions with a different filter for each of them.
Here is pseudo-code for a 2d convolution with padding and striding:
for (b, oz, oy, ox) { // output coordinates
value = 0;
for (iz, ky, kx) { // kernel coordinates and input z
iy = oy*stride_y + ky - pad_low_y;
ix = ox*stride_x + kx - pad_low_x;
if ((iy, ix) inside the base area considered without padding) {
value += input(b, iz, iy, ix) * kernel(oz, iz, ky, kx);
}
}
output(b, oz, oy, ox) = value;
}
ConvertElementType
See also XlaBuilder::ConvertElementType.
Similar to an element-wise static_cast in C++, performs an element-wise conversion operation from a data shape to a target shape. The dimensions must match, and the conversion is element-wise; e.g. s32 elements become f32 elements via an s32-to-f32 conversion routine.
ConvertElementType(operand, new_element_type)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T with dims D |
new_element_type | PrimitiveType | type U |
The dimensions of the operand and the target shape must match. The source and destination element types must not be tuples.
A conversion such as T=s32 to U=f32 will perform a normalizing int-to-float conversion routine such as round-to-nearest-even.
let a: s32[3] = {0, 1, 2};
let b: f32[3] = convert(a, f32);
then b == f32[3]{0.0, 1.0, 2.0}
CrossReplicaSum
Performs AllReduce with a summation computation.
CustomCall
See also XlaBuilder::CustomCall.
Calls a user-provided function within a computation.
CustomCall(target_name, args..., shape)
Arguments | Type | Semantics |
---|---|---|
target_name | string | Name of the function. A call instruction will be emitted which targets this symbol name. |
args | sequence of N XlaOps | N arguments of arbitrary type, which will be passed to the function. |
shape | Shape | Output shape of the function |
The function signature is the same, regardless of the arity or type of args:
extern "C" void target_name(void* out, void** in);
For example, if CustomCall is used as follows:
let x = f32[2] {1,2};
let y = f32[2x3] { {10, 20, 30}, {40, 50, 60} };
CustomCall("myfunc", {x, y}, f32[3x3])
Here is an example of an implementation of myfunc:
extern "C" void myfunc(void* out, void** in) {
float (&x)[2] = *static_cast<float(*)[2]>(in[0]);
float (&y)[2][3] = *static_cast<float(*)[2][3]>(in[1]);
EXPECT_EQ(1, x[0]);
EXPECT_EQ(2, x[1]);
EXPECT_EQ(10, y[0][0]);
EXPECT_EQ(20, y[0][1]);
EXPECT_EQ(30, y[0][2]);
EXPECT_EQ(40, y[1][0]);
EXPECT_EQ(50, y[1][1]);
EXPECT_EQ(60, y[1][2]);
float (&z)[3][3] = *static_cast<float(*)[3][3]>(out);
z[0][0] = x[1] + y[1][0];
// ...
}
The user-provided function must not have side-effects and its execution must be idempotent.
Dot
See also XlaBuilder::Dot.
Dot(lhs, rhs)
Arguments | Type | Semantics |
---|---|---|
lhs | XlaOp | array of type T |
rhs | XlaOp | array of type T |
The exact semantics of this operation depend on the ranks of the operands:
Input | Output | Semantics |
---|---|---|
vector [n] dot vector [n] | scalar | vector dot product |
matrix [m x k] dot vector [k] | vector [m] | matrix-vector multiplication |
matrix [m x k] dot matrix [k x n] | matrix [m x n] | matrix-matrix multiplication |
The operation performs a sum of products over the second dimension of lhs (or the first if it has rank 1) and the first dimension of rhs. These are the "contracted" dimensions. The contracted dimensions of lhs and rhs must be of the same size. In practice, it can be used to perform dot products between vectors, vector/matrix multiplications or matrix/matrix multiplications.
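A minimal sketch of the matrix-vector case, using the C++ XlaBuilder client API (values are invented for the example):
XlaBuilder b("dot");
auto m = ConstantR2<float>(&b, {{1.0f, 2.0f}, {3.0f, 4.0f}});  // f32[2,2]
auto v = ConstantR1<float>(&b, {10.0f, 20.0f});                // f32[2]
// Matrix-vector product: f32[2] {50.0, 110.0}.
Dot(m, v);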
DotGeneral
See also XlaBuilder::DotGeneral.
DotGeneral(lhs, rhs, dimension_numbers)
Arguments | Type | Semantics |
---|---|---|
lhs | XlaOp | array of type T |
rhs | XlaOp | array of type T |
dimension_numbers | DotDimensionNumbers | contracting and batch dimension numbers |
As Dot, but allows contracting and batch dimension numbers to be specified for both the 'lhs' and 'rhs'.
DotDimensionNumbers Fields | Type | Semantics |
---|---|---|
'lhs_contracting_dimensions' | repeated int64 | 'lhs' contracting dimension numbers |
'rhs_contracting_dimensions' | repeated int64 | 'rhs' contracting dimension numbers |
'lhs_batch_dimensions' | repeated int64 | 'lhs' batch dimension numbers |
'rhs_batch_dimensions' | repeated int64 | 'rhs' batch dimension numbers |
DotGeneral performs the sum of products over the contracting dimensions specified in 'dimension_numbers'.
Associated contracting dimension numbers from the 'lhs' and 'rhs' do not need to be the same but must have the same dimension sizes.
Example with contracting dimension numbers:
lhs = { {1.0, 2.0, 3.0},
{4.0, 5.0, 6.0} }
rhs = { {1.0, 1.0, 1.0},
{2.0, 2.0, 2.0} }
DotDimensionNumbers dnums;
dnums.add_lhs_contracting_dimensions(1);
dnums.add_rhs_contracting_dimensions(1);
DotGeneral(lhs, rhs, dnums) -> { {6.0, 12.0},
{15.0, 30.0} }
Associated batch dimension numbers from the 'lhs' and 'rhs' must have the same dimension sizes.
Example with batch dimension numbers (batch size 2, 2x2 matrices):
lhs = { { {1.0, 2.0},
{3.0, 4.0} },
{ {5.0, 6.0},
{7.0, 8.0} } }
rhs = { { {1.0, 0.0},
{0.0, 1.0} },
{ {1.0, 0.0},
{0.0, 1.0} } }
DotDimensionNumbers dnums;
dnums.add_lhs_contracting_dimensions(2);
dnums.add_rhs_contracting_dimensions(1);
dnums.add_lhs_batch_dimensions(0);
dnums.add_rhs_batch_dimensions(0);
DotGeneral(lhs, rhs, dnums) -> { { {1.0, 2.0},
{3.0, 4.0} },
{ {5.0, 6.0},
{7.0, 8.0} } }
Input | Output | Semantics |
---|---|---|
[b0, m, k] dot [b0, k, n] | [b0, m, n] | batch matmul |
[b0, b1, m, k] dot [b0, b1, k, n] | [b0, b1, m, n] | batch matmul |
It follows that the resulting dimension number starts with the batch dimensions, then the 'lhs' non-contracting/non-batch dimensions, and finally the 'rhs' non-contracting/non-batch dimensions.
DynamicSlice
See also XlaBuilder::DynamicSlice.
DynamicSlice extracts a sub-array from the input array at dynamic start_indices. The size of the slice in each dimension is passed in size_indices, which specify the end points of exclusive slice intervals in each dimension: [start, start + size). The shape of start_indices must be rank == 1, with dimension size equal to the rank of operand.
DynamicSlice(operand, start_indices, size_indices)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | N dimensional array of type T |
start_indices | sequence of N XlaOp | List of N scalar integers containing the starting indices of the slice for each dimension. Value must be greater than or equal to zero. |
size_indices | ArraySlice<int64> | List of N integers containing the slice size for each dimension. Each value must be strictly greater than zero, and start + size must be less than or equal to the size of the dimension to avoid wrapping modulo dimension size. |
The effective slice indices are computed by applying the following transformation for each index i in [0, N) before performing the slice:
start_indices[i] = clamp(start_indices[i], 0, operand.dimension_size[i] - size_indices[i])
This ensures that the extracted slice is always in-bounds with respect to the operand array. If the slice is in-bounds before the transformation is applied, the transformation has no effect.
1-dimensional example:
let a = {0.0, 1.0, 2.0, 3.0, 4.0}
let s = {2}
DynamicSlice(a, s, {2}) produces:
{2.0, 3.0}
2-dimensional example:
let b =
{ {0.0, 1.0, 2.0},
{3.0, 4.0, 5.0},
{6.0, 7.0, 8.0},
{9.0, 10.0, 11.0} }
let s = {2, 1}
DynamicSlice(b, s, {2, 2}) produces:
{ { 7.0, 8.0},
{10.0, 11.0} }
DynamicUpdateSlice
See also XlaBuilder::DynamicUpdateSlice.
DynamicUpdateSlice generates a result which is the value of the input array operand, with a slice update overwritten at start_indices. The shape of update determines the shape of the sub-array of the result which is updated. The shape of start_indices must be rank == 1, with dimension size equal to the rank of operand.
DynamicUpdateSlice(operand, update, start_indices)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | N dimensional array of type T |
update | XlaOp | N dimensional array of type T containing the slice update. Each dimension of update shape must be strictly greater than zero, and start + update must be less than or equal to the operand size for each dimension to avoid generating out-of-bounds update indices. |
start_indices | sequence of N XlaOp | List of N scalar integers containing the starting indices of the slice for each dimension. Value must be greater than or equal to zero. |
The effective slice indices are computed by applying the following transformation for each index i in [0, N) before performing the slice:
start_indices[i] = clamp(start_indices[i], 0, operand.dimension_size[i] - update.dimension_size[i])
This ensures that the updated slice is always in-bounds with respect to the operand array. If the slice is in-bounds before the transformation is applied, the transformation has no effect.
1-dimensional example:
let a = {0.0, 1.0, 2.0, 3.0, 4.0}
let u = {5.0, 6.0}
let s = {2}
DynamicUpdateSlice(a, u, s) produces:
{0.0, 1.0, 5.0, 6.0, 4.0}
2-dimensional example:
let b =
{ {0.0, 1.0, 2.0},
{3.0, 4.0, 5.0},
{6.0, 7.0, 8.0},
{9.0, 10.0, 11.0} }
let u =
{ {12.0, 13.0},
{14.0, 15.0},
{16.0, 17.0} }
let s = {1, 1}
DynamicUpdateSlice(b, u, s) produces:
{ {0.0, 1.0, 2.0},
{3.0, 12.0, 13.0},
{6.0, 14.0, 15.0},
{9.0, 16.0, 17.0} }
Element-wise binary arithmetic operations
See also XlaBuilder::Add.
A set of element-wise binary arithmetic operations is supported.
Op(lhs, rhs)
Where Op is one of Add (addition), Sub (subtraction), Mul (multiplication), Div (division), Rem (remainder), Max (maximum), Min (minimum), Atan2 (arctangent of y/x), LogicalAnd (logical AND), LogicalOr (logical OR), or LogicalXor (logical XOR).
Arguments | Type | Semantics |
---|---|---|
lhs | XlaOp | left-hand-side operand: array of type T |
rhs | XlaOp | right-hand-side operand: array of type T |
The arguments' shapes have to be either similar or compatible. See the broadcasting documentation about what it means for shapes to be compatible. The result of an operation has a shape which is the result of broadcasting the two input arrays. In this variant, operations between arrays of different ranks are not supported, unless one of the operands is a scalar.
When Op is Rem, the sign of the result is taken from the dividend, and the absolute value of the result is always less than the divisor's absolute value.
Integer division overflow (signed/unsigned division/remainder by zero or signed division/remainder of INT_SMIN with -1) produces an implementation defined value.
An alternative variant with different-rank broadcasting support exists for these operations:
Op(lhs, rhs, broadcast_dimensions)
Where Op is the same as above. This variant of the operation should be used for arithmetic operations between arrays of different ranks (such as adding a matrix to a vector).
The additional broadcast_dimensions operand is a slice of integers used to expand the rank of the lower-rank operand up to the rank of the higher-rank operand. broadcast_dimensions maps the dimensions of the lower-rank shape to the dimensions of the higher-rank shape. The unmapped dimensions of the expanded shape are filled with dimensions of size one. Degenerate-dimension broadcasting then broadcasts the shapes along these degenerate dimensions to equalize the shapes of both operands. The semantics are described in detail on the broadcasting page.
Element-wise comparison operations
See also XlaBuilder::Eq.
A set of standard element-wise binary comparison operations is supported. Note that standard IEEE 754 floating-point comparison semantics apply when comparing floating-point types.
Op(lhs, rhs)
Where Op is one of Eq (equal-to), Ne (not equal-to), Ge (greater-or-equal-than), Gt (greater-than), Le (less-or-equal-than), Lt (less-than). Another set of operators, EqTotalOrder, NeTotalOrder, GeTotalOrder, GtTotalOrder, LeTotalOrder, and LtTotalOrder, provide the same functionalities, except that they additionally support a total order over the floating point numbers, by enforcing -NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN.
Arguments | Type | Semantics |
---|---|---|
lhs | XlaOp | left-hand-side operand: array of type T |
rhs | XlaOp | right-hand-side operand: array of type T |
The arguments' shapes have to be either similar or compatible. See the broadcasting documentation about what it means for shapes to be compatible. The result of an operation has a shape which is the result of broadcasting the two input arrays with the element type PRED. In this variant, operations between arrays of different ranks are not supported, unless one of the operands is a scalar.
An alternative variant with different-rank broadcasting support exists for these operations:
Op(lhs, rhs, broadcast_dimensions)
Where Op is the same as above. This variant of the operation should be used for comparison operations between arrays of different ranks (such as comparing a matrix against a vector).
The additional broadcast_dimensions operand is a slice of integers specifying the dimensions to use for broadcasting the operands. The semantics are described in detail on the broadcasting page.
Element-wise unary functions
XlaBuilder supports these element-wise unary functions:
Abs(operand)
Element-wise abs x -> |x|.
Ceil(operand)
Element-wise ceil x -> ⌈x⌉.
Clz(operand)
Element-wise count of leading zeros x -> clz(x).
Cos(operand)
Element-wise cosine x -> cos(x).
Exp(operand)
Element-wise natural exponential x -> e^x.
Floor(operand)
Element-wise floor x -> ⌊x⌋.
Imag(operand)
Element-wise imaginary part of a complex (or real) shape. x -> imag(x). If the operand is a floating point type, returns 0.
IsFinite(operand)
Tests whether each element of operand is finite, ie, is not positive or negative infinity, and is not NaN. Returns an array of PRED values with the same shape as the input, where each element is true if and only if the corresponding input element is finite.
Log(operand)
Element-wise natural logarithm x -> ln(x).
Log1p(operand)
Element-wise natural logarithm of a number plus one x -> ln(x + 1).
LogicalNot(operand)
Element-wise logical not x -> !(x).
Logistic(operand)
Element-wise logistic function computation x -> logistic(x).
PopulationCount(operand)
Computes the number of bits set in each element of operand.
Neg(operand)
Element-wise negation x -> -x.
Real(operand)
Element-wise real part of a complex (or real) shape. x -> real(x). If the operand is a floating point type, returns the same value.
Rsqrt(operand)
Element-wise reciprocal of square root operation x -> 1.0 / sqrt(x).
Sign(operand)
Element-wise sign operation x -> sgn(x) where
\[\text{sgn}(x) = \begin{cases} -1 & x < 0\\ -0 & x = -0\\ NaN & x = NaN\\ +0 & x = +0\\ 1 & x > 0 \end{cases}\]
using the comparison operator of the element type of operand.
Sqrt(operand)
Element-wise square root operation x -> sqrt(x).
Cbrt(operand)
Element-wise cubic root operation x -> cbrt(x).
Tan(operand)
Element-wise tangent x -> tan(x).
Tanh(operand)
Element-wise hyperbolic tangent x -> tanh(x).
Round(operand)
Element-wise rounding, ties away from zero.
RoundNearestEven(operand)
Element-wise rounding, ties to nearest even.
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The operand to the function |
The function is applied to each element in the operand array, resulting in an array with the same shape. It is allowed for operand to be a scalar (rank 0).
Fft
The XLA FFT operation implements the forward and inverse Fourier Transforms for real and complex inputs/outputs. Multidimensional FFTs on up to 3 axes are supported.
See also XlaBuilder::Fft.
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The array we are Fourier transforming. |
fft_type | FftType | See the table below. |
fft_length | ArraySlice<int64> | The time-domain lengths of the axes being transformed. This is needed in particular for IRFFT to right-size the innermost axis, since RFFT(fft_length=[16]) has the same output shape as RFFT(fft_length=[17]) . |
FftType | Semantics |
---|---|
FFT | Forward complex-to-complex FFT. Shape is unchanged. |
IFFT | Inverse complex-to-complex FFT. Shape is unchanged. |
RFFT | Forward real-to-complex FFT. Shape of the innermost axis is reduced to fft_length[-1] // 2 + 1 if fft_length[-1] is a non-zero value, omitting the reversed conjugate part of the transformed signal beyond the Nyquist frequency. |
IRFFT | Inverse complex-to-real FFT (ie takes complex, returns real). Shape of the innermost axis is expanded to fft_length[-1] if fft_length[-1] is a non-zero value, inferring the part of the transformed signal beyond the Nyquist frequency from the reverse conjugate of the 1 to fft_length[-1] // 2 + 1 entries. |
Multidimensional FFT
When more than 1 fft_length is provided, this is equivalent to applying a cascade of FFT operations to each of the innermost axes. Note that for the real->complex and complex->real cases, the innermost axis transform is (effectively) performed first (RFFT; last for IRFFT), which is why the innermost axis is the one which changes size. Other axis transforms will then be complex->complex.
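A minimal sketch using the C++ XlaBuilder client API (the length-16 input is invented for the example):
XlaBuilder b("fft");
auto x = Parameter(&b, 0, ShapeUtil::MakeShape(F32, {16}), "x");
// Forward real-to-complex FFT; the innermost axis shrinks to
// fft_length[-1] // 2 + 1 = 9, so the output shape is c64[9].
Fft(x, FftType::RFFT, /*fft_length=*/{16});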
Implementation details
CPU FFT is backed by Eigen's TensorFFT. GPU FFT uses cuFFT.
Gather
The XLA gather operation stitches together several slices (each slice at a potentially different runtime offset) of an input array.
General Semantics
See also XlaBuilder::Gather. For a more intuitive description, see the "Informal Description" section below.
gather(operand, start_indices, offset_dims, collapsed_slice_dims, slice_sizes, start_index_map)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The array we're gathering from. |
start_indices | XlaOp | Array containing the starting indices of the slices we gather. |
index_vector_dim | int64 | The dimension in start_indices that "contains" the starting indices. See below for a detailed description. |
offset_dims | ArraySlice<int64> | The set of dimensions in the output shape that offset into an array sliced from operand. |
slice_sizes | ArraySlice<int64> | slice_sizes[i] is the bounds for the slice on dimension i . |
collapsed_slice_dims | ArraySlice<int64> | The set of dimensions in each slice that are collapsed away. These dimensions must have size 1. |
start_index_map | ArraySlice<int64> | A map that describes how to map indices in start_indices to legal indices into operand. |
indices_are_sorted | bool | Whether the indices are guaranteed to be sorted by the caller. |
unique_indices | bool | Whether the indices are guaranteed to be unique by the caller. |
For convenience, we label dimensions in the output array not in offset_dims as batch_dims.
The output is an array of rank batch_dims.size + offset_dims.size.
The operand.rank must equal the sum of offset_dims.size and collapsed_slice_dims.size. Also, slice_sizes.size has to be equal to operand.rank.
If index_vector_dim is equal to start_indices.rank we implicitly consider start_indices to have a trailing 1 dimension (ie if start_indices was of shape [6,7] and index_vector_dim is 2 then we implicitly consider the shape of start_indices to be [6,7,1]).
The bounds for the output array along dimension i are computed as follows:
1. If i is present in batch_dims (ie is equal to batch_dims[k] for some k) then we pick the corresponding dimension bounds out of start_indices.shape, skipping index_vector_dim (ie pick start_indices.shape.dims[k] if k < index_vector_dim and start_indices.shape.dims[k+1] otherwise).
2. If i is present in offset_dims (ie equal to offset_dims[k] for some k) then we pick the corresponding bound out of slice_sizes after accounting for collapsed_slice_dims (ie we pick adjusted_slice_sizes[k] where adjusted_slice_sizes is slice_sizes with the bounds at indices collapsed_slice_dims removed).
Formally, the operand index In corresponding to a given output index Out is calculated as follows:
1. Let G = { Out[k] for k in batch_dims }. Use G to slice out a vector S such that S[i] = start_indices[Combine(G, i)] where Combine(A, b) inserts b at position index_vector_dim into A. Note that this is well defined even if G is empty -- if G is empty then S = start_indices.
2. Create a starting index, S_in, into operand using S by scattering S using start_index_map. More precisely:
- S_in[start_index_map[k]] = S[k] if k < start_index_map.size.
- S_in[_] = 0 otherwise.
3. Create an index O_in into operand by scattering the indices at the offset dimensions in Out according to the collapsed_slice_dims set. More precisely:
- O_in[remapped_offset_dims(k)] = Out[offset_dims[k]] if k < offset_dims.size (remapped_offset_dims is defined below).
- O_in[_] = 0 otherwise.
4. In is O_in + S_in where + is element-wise addition.
remapped_offset_dims is a monotonic function with domain [0, offset_dims.size) and range [0, operand.rank) \ collapsed_slice_dims. So if, eg, offset_dims.size is 4, operand.rank is 6 and collapsed_slice_dims is {0, 2} then remapped_offset_dims is {0 → 1, 1 → 3, 2 → 4, 3 → 5}.
If indices_are_sorted is set to true then XLA can assume that start_indices are sorted (in ascending start_index_map order) by the user. If they are not then the semantics are implementation defined.
If unique_indices is set to true then XLA can assume that all elements gathered from are unique, so XLA could use non-atomic operations. If unique_indices is set to true and the indices gathered from are not unique then the semantics are implementation defined.
Informal Description and Examples
Informally, every index Out in the output array corresponds to an element E in the operand array, computed as follows:
1. We use the batch dimensions in Out to look up a starting index from start_indices.
2. We use start_index_map to map the starting index (whose size may be less than operand.rank) to a "full" starting index into operand.
3. We dynamic-slice out a slice with size slice_sizes using the full starting index.
4. We reshape the slice by collapsing the collapsed_slice_dims dimensions. Since all collapsed slice dimensions must have a bound of 1, this reshape is always legal.
5. We use the offset dimensions in Out to index into this slice to get the input element, E, corresponding to output index Out.
index_vector_dim is set to start_indices.rank - 1 in all of the examples that follow. More interesting values for index_vector_dim do not change the operation fundamentally, but make the visual representation more cumbersome.
To get an intuition on how all of the above fits together, let's look at an example that gathers 5 slices of shape [8,6] from a [16,11] array. The position of a slice into the [16,11] array can be represented as an index vector of shape S64[2], so the set of 5 positions can be represented as an S64[5,2] array.
The behavior of the gather operation can then be depicted as an index transformation that takes [G, O0, O1], an index in the output shape, and maps it to an element in the input array in the following way:
We first select an (X, Y) vector from the gather indices array using G. The element in the output array at index [G, O0, O1] is then the element in the input array at index [X + O0, Y + O1].
slice_sizes is [8,6], which decides the range of O0 and O1, and this in turn decides the bounds of the slice.
This gather operation acts as a batch dynamic slice with G as the batch dimension.
The gather indices may be multidimensional. For instance, a more general version of the example above using a "gather indices" array of shape [4,5,2] would translate indices like this:
Again, this acts as a batch dynamic slice with G0 and G1 as the batch dimensions. The slice size is still [8,6].
The gather operation in XLA generalizes the informal semantics outlined above in the following ways:
1. We can configure which dimensions in the output shape are the offset dimensions (dimensions containing O0, O1 in the last example). The output batch dimensions (dimensions containing G0, G1 in the last example) are defined to be the output dimensions that are not offset dimensions.
2. The number of output offset dimensions explicitly present in the output shape may be smaller than the input rank. These "missing" dimensions, which are listed explicitly as collapsed_slice_dims, must have a slice size of 1. Since they have a slice size of 1 the only valid index for them is 0 and eliding them does not introduce ambiguity.
3. The slice extracted from the "Gather Indices" array ((X, Y) in the last example) may have fewer elements than the input array rank, and an explicit mapping dictates how the index should be expanded to have the same rank as the input.
As a final example, we use (2) and (3) to implement `tf.gather_nd`:

\(G_0\) and \(G_1\) are used to slice out a starting index from the gather indices array as usual, except the starting index has only one element, \(X\). Similarly, there is only one output offset index with the value \(O_0\). However, before being used as indices into the input array, these are expanded in accordance with "Gather Index Mapping" (`start_index_map` in the formal description) and "Offset Mapping" (`remapped_offset_dims` in the formal description) into \([X, 0]\) and \([0, O_0]\) respectively, adding up to \([X, O_0]\). In other words, the output index \([G_0, G_1, O_0]\) maps to the input index \([GatherIndices[G_0, G_1, 0], O_0]\), which gives us the semantics for `tf.gather_nd`.

`slice_sizes` for this case is `[1,11]`. Intuitively this means that every index \(X\) in the gather indices array picks an entire row and the result is the concatenation of all these rows.
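To make this concrete, here is a minimal NumPy sketch (not the XLA API; the operand values and index choices are made up for illustration) of this `tf.gather_nd`-style gather, where each index picks a whole row of a `[16,11]` operand:

```
import numpy as np

operand = np.arange(16 * 11).reshape(16, 11)      # stand-in [16,11] input array
gather_indices = np.array([[2], [0], [2], [5]])   # shape [4,1]: one start index per slice
# With slice_sizes = [1, 11], each index X selects the whole row X; collapsing the
# size-1 slice dimension and stacking the rows gives the gather result.
result = np.stack([operand[x[0], :] for x in gather_indices])   # shape [4, 11]
```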
GetDimensionSize
See also XlaBuilder::GetDimensionSize
.
Returns the size of the given dimension of the operand. The operand must be array shaped.
GetDimensionSize(operand, dimension)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional input array |
dimension | int64 | A value in the interval [0, n) that specifies the dimension |
SetDimensionSize
See also XlaBuilder::SetDimensionSize
.
Sets the dynamic size of XlaOp's given dimension. The operand must be array shaped.
SetDimensionSize(operand, size, dimension)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | n dimensional input array. |
size | XlaOp | int32 representing the runtime dynamic size. |
dimension | int64 | A value in the interval [0, n) that specifies the dimension. |
Pass through the operand as result, with dynamic dimension tracked by the compiler.
Padded values will be ignored by downstream reduction ops.
let v: f32[10] = f32[10]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
let five: s32 = 5;
let six: s32 = 6;
// Setting dynamic dimension size doesn't change the upper bound of the static
// shape.
let padded_v_five: f32[10] = set_dimension_size(v, five, /*dimension=*/0);
let padded_v_six: f32[10] = set_dimension_size(v, six, /*dimension=*/0);
// sum == 1 + 2 + 3 + 4 + 5
let sum:f32[] = reduce_sum(padded_v_five);
// product == 1 * 2 * 3 * 4 * 5
let product:f32[] = reduce_product(padded_v_five);
// Changing padding size will yield different result.
// sum == 1 + 2 + 3 + 4 + 5 + 6
let sum:f32[] = reduce_sum(padded_v_six);
GetTupleElement
See also XlaBuilder::GetTupleElement
.
Indexes into a tuple with a compile-time-constant value.
The value must be a compile-time-constant so that shape inference can determine the type of the resulting value.
This is analogous to std::get<int N>(t)
in C++. Conceptually:
let v: f32[10] = f32[10]{0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
let s: s32 = 5;
let t: (f32[10], s32) = tuple(v, s);
let element_1: s32 = gettupleelement(t, 1); // Inferred shape matches s32.
See also tf.tuple
.
Infeed
See also XlaBuilder::Infeed
.
Infeed(shape)
Argument | Type | Semantics |
---|---|---|
shape | Shape | Shape of the data read from the Infeed interface. The layout field of the shape must be set to match the layout of the data sent to the device; otherwise its behavior is undefined. |
Reads a single data item from the implicit Infeed streaming interface of the device, interpreting the data as the given shape and its layout, and returns an `XlaOp`
of the data. Multiple Infeed operations are allowed in a computation, but there must be a total order among the Infeed operations. For example, two Infeeds in the code below have a total order since there is a dependency between the while loops.
result1 = while (condition, init = init_value) {
Infeed(shape)
}
result2 = while (condition, init = result1) {
Infeed(shape)
}
Nested tuple shapes are not supported. For an empty tuple shape, the Infeed operation is effectively a no-op and proceeds without reading any data from the Infeed of the device.
Iota
See also XlaBuilder::Iota
.
Iota(shape, iota_dimension)
Builds a constant literal on device rather than a potentially large host transfer. Creates an array that has specified shape and holds values starting at zero and incrementing by one along the specified dimension. For floating-point types, the produced array is equivalent to ConvertElementType(Iota(...))
where the Iota
is of integral type and the conversion is to the floating-point type.
Arguments | Type | Semantics |
---|---|---|
shape | Shape | Shape of the array created by Iota() |
iota_dimension | int64 | The dimension to increment along. |
For example, Iota(s32[4, 8], 0)
returns
[[0, 0, 0, 0, 0, 0, 0, 0 ],
[1, 1, 1, 1, 1, 1, 1, 1 ],
[2, 2, 2, 2, 2, 2, 2, 2 ],
[3, 3, 3, 3, 3, 3, 3, 3 ]]
Iota(s32[4, 8], 1)
returns
[[0, 1, 2, 3, 4, 5, 6, 7 ],
[0, 1, 2, 3, 4, 5, 6, 7 ],
[0, 1, 2, 3, 4, 5, 6, 7 ],
[0, 1, 2, 3, 4, 5, 6, 7 ]]
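The same values can be sketched in NumPy (purely illustrative, not the XLA API) by broadcasting an integer range along the non-iota dimension:

```
import numpy as np

# Iota(s32[4, 8], 0): values increment along dimension 0 and are broadcast along dimension 1.
iota_dim0 = np.broadcast_to(np.arange(4, dtype=np.int32)[:, None], (4, 8))
# Iota(s32[4, 8], 1): values increment along dimension 1 and are broadcast along dimension 0.
iota_dim1 = np.broadcast_to(np.arange(8, dtype=np.int32), (4, 8))
```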
Map
See also XlaBuilder::Map
.
Map(operands..., computation)
Arguments | Type | Semantics |
---|---|---|
operands | sequence of N XlaOp s | N arrays of types T 0..T {N-1} |
computation | XlaComputation | computation of type T_0, T_1, ..., T_{N + M -1} -> S with N parameters of type T and M of arbitrary type |
dimensions | int64 array | array of map dimensions |
Applies a scalar function over the given operands
arrays, producing an array of the same dimensions where each element is the result of the mapped function applied to the corresponding elements in the input arrays.
The mapped function is an arbitrary computation with the restriction that it has N inputs of scalar type T
and a single output with type S
. The output has the same dimensions as the operands except that the element type T is replaced with S.
For example: Map(op1, op2, op3, computation, par1)
maps elem_out <- computation(elem1, elem2, elem3, par1)
at each (multi-dimensional) index in the input arrays to produce the output array.
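As an illustration only (a NumPy stand-in, not the XLA API; the scalar function is arbitrary), mapping a scalar computation over two operands of the same shape looks like this:

```
import numpy as np

# A scalar computation of type (T, T) -> S, applied at every index of the operands.
computation = np.vectorize(lambda a, b: a * b + 1.0)

op1 = np.array([1.0, 2.0, 3.0])
op2 = np.array([4.0, 5.0, 6.0])
computation(op1, op2)   # -> [5.0, 11.0, 19.0], same dimensions as the operands
```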
OptimizationBarrier
Blocks any optimization pass from moving computations across the barrier.
Ensures that all inputs are evaluated before any operators that depend on the barrier's outputs.
Pad
See also XlaBuilder::Pad
.
Pad(operand, padding_value, padding_config)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T |
padding_value | XlaOp | scalar of type T to fill in the added padding |
padding_config | PaddingConfig | padding amount on both edges (low, high) and between the elements of each dimension |
Expands the given operand
array by padding around the array as well as between the elements of the array with the given padding_value
. padding_config
specifies the amount of edge padding and the interior padding for each dimension.
PaddingConfig
is a repeated field of PaddingConfigDimension
, which contains three fields for each dimension: edge_padding_low
, edge_padding_high
, and interior_padding
.
edge_padding_low
and edge_padding_high
specify the amount of padding added at the low-end (next to index 0) and the high-end (next to the highest index) of each dimension respectively. The amount of edge padding can be negative -- the absolute value of negative padding indicates the number of elements to remove from the specified dimension.
interior_padding
specifies the amount of padding added between any two elements in each dimension; it may not be negative. Interior padding occurs logically before edge padding, so in the case of negative edge padding, elements are removed from the interior-padded operand.
This operation is a no-op if the edge padding pairs are all (0, 0) and the interior padding values are all 0. The figure below shows examples of different edge_padding
and interior_padding
values for a two-dimensional array.
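A small NumPy sketch of a single dimension of this behavior (illustrative only; `pad_1d` is a hypothetical helper and negative edge padding is not handled) shows interior padding being applied before edge padding:

```
import numpy as np

def pad_1d(x, padding_value, edge_low, edge_high, interior):
    # Interior padding: insert `interior` copies of padding_value between neighboring elements.
    body = np.full(len(x) + (len(x) - 1) * interior, padding_value, dtype=x.dtype)
    body[::interior + 1] = x
    # Edge padding on the low and high ends.
    return np.concatenate([np.full(edge_low, padding_value, dtype=x.dtype),
                           body,
                           np.full(edge_high, padding_value, dtype=x.dtype)])

pad_1d(np.array([1.0, 2.0, 3.0]), 0.0, 1, 2, 1)
# -> [0., 1., 0., 2., 0., 3., 0., 0.]
```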

Recv
See also XlaBuilder::Recv
.
Recv(shape, channel_handle)
Arguments | Type | Semantics |
---|---|---|
shape | Shape | shape of the data to receive |
channel_handle | ChannelHandle | unique identifier for each send/recv pair |
Receives data of the given shape from a Send
instruction in another computation that shares the same channel handle. Returns an `XlaOp` for the received data.
The client API of Recv
operation represents synchronous communication. However, the instruction is internally decomposed into 2 HLO instructions ( Recv
and RecvDone
) to enable asynchronous data transfers. See also HloInstruction::CreateRecv
and HloInstruction::CreateRecvDone
.
Recv(const Shape& shape, int64 channel_id)
Allocates resources required to receive data from a Send
instruction with the same channel_id. Returns a context for the allocated resources, which is used by a following RecvDone
instruction to wait for the completion of the data transfer. The context is a tuple of {receive buffer (shape), request identifier (U32)} and it can only be used by a RecvDone
instruction.
RecvDone(HloInstruction context)
Given a context created by a Recv
instruction, waits for the data transfer to complete and returns the received data.
Reduce
See also XlaBuilder::Reduce
.
Applies a reduction function to one or more arrays in parallel.
Reduce(operands..., init_values..., computation, dimensions)
Arguments | Type | Semantics |
---|---|---|
operands | Sequence of N XlaOp | N arrays of types T_0, ..., T_{N-1} . |
init_values | Sequence of N XlaOp | N scalars of types T_0, ..., T_{N-1} . |
computation | XlaComputation | computation of type T_0, ..., T_{N-1}, T_0, ..., T_{N-1} -> Collate(T_0, ..., T_{N-1}) . |
dimensions | int64 array | unordered array of dimensions to reduce. |
Where:
- N is required to be greater or equal to 1.
- The computation has to be "roughly" associative (see below).
- All input arrays must have the same dimensions.
- All initial values have to form an identity under `computation`.
- If `N = 1`, `Collate(T)` is `T`.
- If `N > 1`, `Collate(T_0, ..., T_{N-1})` is a tuple of `N` elements of type `T`.
This operation reduces one or more dimensions of each input array into scalars. The rank of each returned array is rank(operand) - len(dimensions)
. The output of the op is Collate(Q_0, ..., Q_N)
where Q_i
is an array of type T_i
, the dimensions of which are described below.
Different backends are allowed to reassociate the reduction computation. This can lead to numerical differences, as some reduction functions like addition are not associative for floats. However, if the range of the data is limited, floating-point addition is close enough to being associative for most practical uses.
Example
When reducing across one dimension in a single 1D array with values [10, 11, 12, 13]
, with reduction function f
(this is computation
) then that could be computed as
f(10, f(11, f(12, f(init_value, 13))))
but there are also many other possibilities, e.g.
f(init_value, f(f(10, f(init_value, 11)), f(f(init_value, 12), f(init_value, 13))))
The following is a rough pseudo-code example of how reduction could be implemented, using summation as the reduction computation with an initial value of 0.
result_shape <- remove all dims in dimensions from operand_shape
# Iterate over all elements in result_shape. The number of r's here is equal
# to the rank of the result
for r0 in range(result_shape[0]), r1 in range(result_shape[1]), ...:
# Initialize this result element
result[r0, r1...] <- 0
# Iterate over all the reduction dimensions
for d0 in range(dimensions[0]), d1 in range(dimensions[1]), ...:
# Increment the result element with the value of the operand's element.
# The index of the operand's element is constructed from all ri's and di's
# in the right order (by construction ri's and di's together index over the
# whole operand shape).
result[r0, r1...] += operand[ri... di]
Here's an example of reducing a 2D array (matrix). The shape has rank 2, dimension 0 of size 2 and dimension 1 of size 3:

Results of reducing dimensions 0 or 1 with an "add" function:

Note that both reduction results are 1D arrays. The diagram shows one as column and another as row just for visual convenience.
For a more complex example, here is a 3D array. Its rank is 3, dimension 0 of size 4, dimension 1 of size 2 and dimension 2 of size 3. For simplicity, the values 1 to 6 are replicated across dimension 0.

Similarly to the 2D example, we can reduce just one dimension. If we reduce dimension 0, for example, we get a rank-2 array where all values across dimension 0 were folded into a scalar:
| 4 8 12 |
| 16 20 24 |
If we reduce dimension 2, we also get a rank-2 array where all values across dimension 2 were folded into a scalar:
| 6 15 |
| 6 15 |
| 6 15 |
| 6 15 |
Note that the relative order between the remaining dimensions in the input is preserved in the output, but some dimensions may get assigned new numbers (since the rank changes).
We can also reduce multiple dimensions. Add-reducing dimensions 0 and 1 produces the 1D array [20, 28, 36]
.
Reducing the 3D array over all its dimensions produces the scalar 84
.
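The numbers in these examples can be reproduced with a NumPy sketch (illustrative only; `np.sum` plays the role of an add `computation` with an initial value of 0):

```
import numpy as np

# The values 1 to 6 replicated across dimension 0, as in the 3D example above.
a = np.tile(np.array([[1, 2, 3], [4, 5, 6]]), (4, 1, 1))   # shape [4, 2, 3]

a.sum(axis=0)        # reduce dimension 0 -> [[ 4,  8, 12], [16, 20, 24]]
a.sum(axis=2)        # reduce dimension 2 -> [[6, 15], [6, 15], [6, 15], [6, 15]]
a.sum(axis=(0, 1))   # reduce dimensions 0 and 1 -> [20, 28, 36]
a.sum()              # reduce all dimensions -> 84
```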
Variadic Reduce
When N > 1
, reduce function application is slightly more complex, as it is applied simultaneously to all inputs. The operands are supplied to the computation in the following order:
- Running reduced value for the first operand
- ...
- Running reduced value for the N'th operand
- Input value for the first operand
- ...
- Input value for the N'th operand
For example, consider the following reduction function, which can be used to compute the max and the argmax of a 1-D array in parallel:
f: (Float, Int, Float, Int) -> Float, Int
f(max, argmax, value, index):
if value >= max:
return (value, index)
else:
return (max, argmax)
For 1-D Input arrays V = Float[N], K = Int[N]
, and init values I_V = Float, I_K = Int
, the result f_(N-1)
of reducing across the only input dimension is equivalent to the following recursive application:
f_0 = f(I_V, I_K, V_0, K_0)
f_1 = f(f_0.first, f_0.second, V_1, K_1)
...
f_(N-1) = f(f_(N-2).first, f_(N-2).second, V_(N-1), K_(N-1))
Applying this reduction to an array of values, and an array of sequential indices (ie iota), will co-iterate over the arrays, and return a tuple containing the maximal value and the matching index.
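A plain-Python sketch of this recursive application (illustrative only; the input values are made up) makes the co-iteration explicit:

```
def f(max_val, argmax, value, index):
    # Running reduced values come first, then the input values, as described above.
    return (value, index) if value >= max_val else (max_val, argmax)

V = [3.0, 7.0, 1.0, 7.5, 2.0]          # values
K = list(range(len(V)))                # sequential indices (iota)
acc = (float("-inf"), -1)              # (I_V, I_K): identity for max plus a placeholder index
for v, k in zip(V, K):
    acc = f(acc[0], acc[1], v, k)
acc   # -> (7.5, 3): the maximal value and the matching index
```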
ReducePrecision
See also XlaBuilder::ReducePrecision
.
Models the effect of converting floating-point values to a lower-precision format (such as IEEE-FP16) and back to the original format. The number of exponent and mantissa bits in the lower-precision format can be specified arbitrarily, although all bit sizes may not be supported on all hardware implementations.
ReducePrecision(operand, mantissa_bits, exponent_bits)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of floating-point type T . |
exponent_bits | int32 | number of exponent bits in lower-precision format |
mantissa_bits | int32 | number of mantissa bits in lower-precision format |
The result is an array of type T
. The input values are rounded to the nearest value representable with the given number of mantissa bits (using "ties to even" semantics), and any values that exceed the range specified by the number of exponent bits are clamped to positive or negative infinity. NaN
values are retained, although they may be converted to canonical NaN
values.
The lower-precision format must have at least one exponent bit (in order to distinguish a zero value from an infinity, since both have a zero mantissa), and must have a non-negative number of mantissa bits. The number of exponent or mantissa bits may exceed the corresponding value for type T
; the corresponding portion of the conversion is then simply a no-op.
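For the particular choice `exponent_bits=5`, `mantissa_bits=10` (the IEEE-FP16 layout), the effect can be approximated in NumPy by a float16 round trip (illustrative only; other bit widths would require explicit bit manipulation):

```
import numpy as np

x = np.array([0.1, 1.0000001, 3.14159], dtype=np.float32)
# Rounds each value to the nearest float16 and widens it back to float32,
# modeling ReducePrecision(x, mantissa_bits=10, exponent_bits=5).
reduced = x.astype(np.float16).astype(np.float32)
```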
ReduceScatter
See also XlaBuilder::ReduceScatter
.
ReduceScatter is a collective operation that effectively does an AllReduce and then scatters the result by splitting it into shard_count
blocks along the scatter_dimension
and replica i
in the replica group receives the ith
shard.
ReduceScatter(operand, computation, scatter_dim, shard_count, replica_group_ids, channel_id)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | Array or a non-empty tuple of arrays to reduce across replicas. |
computation | XlaComputation | Reduction computation |
scatter_dimension | int64 | Dimension to scatter. |
shard_count | int64 | Number of blocks to split scatter_dimension |
replica_groups | vector of vectors of int64 | Groups between which the reductions are performed |
channel_id | optional int64 | Optional channel ID for cross-module communication |
- When `operand` is a tuple of arrays, the reduce-scatter is performed on each element of the tuple.
- `replica_groups` is a list of replica groups between which the reduction is performed (replica id for the current replica can be retrieved using `ReplicaId`). The order of replicas in each group determines the order in which the all-reduce result will be scattered. `replica_groups` must either be empty (in which case all replicas belong to a single group), or contain the same number of elements as the number of replicas. When there are more than one replica groups, they all must be of the same size. For example, `replica_groups = {0, 2}, {1, 3}` performs reduction between the replicas `0` and `2`, and `1` and `3`, and then scatters the result.
- `shard_count` is the size of each replica group. We need this in cases where `replica_groups` are empty. If `replica_groups` is not empty, `shard_count` must be equal to the size of each replica group.
- `channel_id` is used for cross-module communication: only `reduce-scatter` operations with the same `channel_id` can communicate with each other.
The output shape is the input shape with the scatter_dimension
made shard_count
times smaller. For example, if there are two replicas and the operand has the value [1.0, 2.25]
and [3.0, 5.25]
respectively on the two replicas, then the output value from this op where scatter_dim
is 0
will be [4.0]
for the first replica and [7.5]
for the second replica.
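Conceptually (a NumPy sketch of the semantics only, not of the distributed implementation), this is an all-reduce followed by an even split along the scatter dimension:

```
import numpy as np

# One input array per replica, as in the example above.
replica_inputs = [np.array([1.0, 2.25]), np.array([3.0, 5.25])]

summed = np.add.reduce(replica_inputs)            # all-reduce with addition: [4.0, 7.5]
shards = np.split(summed, len(replica_inputs))    # replica 0 receives [4.0], replica 1 receives [7.5]
```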
ReduceWindow
See also XlaBuilder::ReduceWindow
.
Applies a reduction function to all elements in each window of a sequence of N multi-dimensional arrays, producing a single or a tuple of N multi-dimensional arrays as output. Each output array has the same number of elements as the number of valid positions of the window. A pooling layer can be expressed as a ReduceWindow
. Similar to Reduce
, the applied computation
is always passed the init_values
on the left-hand side.
ReduceWindow(operands..., init_values..., computation, window_dimensions, window_strides, padding)
Arguments | Type | Semantics |
---|---|---|
operands | N XlaOps | A sequence of N multi-dimensional arrays of types T_0,..., T_{N-1} , each representing the base area on which the window is placed. |
init_values | N XlaOps | The N starting values for the reduction, one for each of the N operands. See Reduce for details. |
computation | XlaComputation | Reduction function of type T_0, ..., T_{N-1}, T_0, ..., T_{N-1} -> Collate(T_0, ..., T_{N-1}) , to apply to elements in each window of all the input operands. |
window_dimensions | ArraySlice<int64> | array of integers for window dimension values |
window_strides | ArraySlice<int64> | array of integers for window stride values |
base_dilations | ArraySlice<int64> | array of integers for base dilation values |
window_dilations | ArraySlice<int64> | array of integers for window dilation values |
padding | Padding | padding type for window (Padding::kSame, which pads so as to have the same output shape as input if the stride is 1, or Padding::kValid, which uses no padding and "stops" the window once it no longer fits) |
Where:
- N is required to be greater or equal to 1.
- All input arrays must have the same dimensions.
- If `N = 1`, `Collate(T)` is `T`.
- If `N > 1`, `Collate(T_0, ..., T_{N-1})` is a tuple of `N` elements of type `(T0,...T{N-1})`.
The code and figure below show an example of using `ReduceWindow`. Input is a matrix of size [4x6] and both window_dimensions and window_stride_dimensions are [2x3].
// Create a computation for the reduction (maximum).
XlaComputation max;
{
XlaBuilder builder(client_, "max");
auto y = builder.Parameter(0, ShapeUtil::MakeShape(F32, {}), "y");
auto x = builder.Parameter(1, ShapeUtil::MakeShape(F32, {}), "x");
builder.Max(y, x);
max = builder.Build().value();
}
// Create a ReduceWindow computation with the max reduction computation.
XlaBuilder builder(client_, "reduce_window_2x3");
auto shape = ShapeUtil::MakeShape(F32, {4, 6});
auto input = builder.Parameter(0, shape, "input");
builder.ReduceWindow(
input,
/*init_val=*/builder.ConstantLiteral(LiteralUtil::MinValue(F32)),
*max,
/*window_dimensions=*/{2, 3},
/*window_stride_dimensions=*/{2, 3},
Padding::kValid);

Stride of 1 in a dimension specifies that the position of a window in the dimension is 1 element away from its adjacent window. In order to specify that no windows overlap with each other, window_stride_dimensions should be equal to window_dimensions. The figure below illustrates the use of two different stride values. Padding is applied to each dimension of the input and the calculations are the same as though the input came in with the dimensions it has after padding.

For a non-trivial padding example, consider computing reduce-window minimum (initial value is MAX_FLOAT
) with dimension 3
and stride 2
over the input array [10000, 1000, 100, 10, 1]
. Padding kValid
computes minimums over two valid windows: [10000, 1000, 100]
and [100, 10, 1]
, resulting in the output [100, 1]
. Padding kSame
first pads the array so that the shape after the reduce-window would be the same as input for stride one by adding initial elements on both sides, getting [MAX_VALUE, 10000, 1000, 100, 10, 1, MAX_VALUE]
. Running reduce-window over the padded array operates on three windows [MAX_VALUE, 10000, 1000]
, [1000, 100, 10]
, [10, 1, MAX_VALUE]
, and yields [1000, 10, 1]
.
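Both padding modes in this example can be checked with a small NumPy sketch (illustrative only; the explicit loops stand in for the window traversal):

```
import numpy as np

x = np.array([10000.0, 1000.0, 100.0, 10.0, 1.0])
window, stride = 3, 2

# Padding::kValid: only windows that fit entirely inside the input.
valid = [x[i:i + window].min() for i in range(0, len(x) - window + 1, stride)]
# -> [100.0, 1.0]

# Padding::kSame (for this shape): pad both ends with the initial value (MAX_FLOAT for a min-reduction).
padded = np.concatenate([[np.finfo(np.float32).max], x, [np.finfo(np.float32).max]])
same = [padded[i:i + window].min() for i in range(0, len(padded) - window + 1, stride)]
# -> [1000.0, 10.0, 1.0]
```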
The evaluation order of the reduction function is arbitrary and may be non-deterministic. Therefore, the reduction function should not be overly sensitive to reassociation. See the discussion about associativity in the context of Reduce
for more details.
ReplicaId
See also XlaBuilder::ReplicaId
.
Returns the unique ID (U32 scalar) of the replica.
ReplicaId()
The unique ID of each replica is an unsigned integer in the interval [0, N)
, where N
is the number of replicas. Since all the replicas are running the same program, a ReplicaId()
call in the program will return a different value on each replica.
Reshape
See also XlaBuilder::Reshape
and the Collapse
operation.
Reshapes the dimensions of an array into a new configuration.
Reshape(operand, new_sizes)
Reshape(operand, dimensions, new_sizes)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T |
dimensions | int64 vector | order in which dimensions are collapsed |
new_sizes | int64 vector | vector of sizes of new dimensions |
Conceptually, reshape first flattens an array into a one-dimensional vector of data values, and then refines this vector into a new shape. The input arguments are an arbitrary array of type T, a compile-time-constant vector of dimension indices, and a compile-time-constant vector of dimension sizes for the result. The values in the dimension
vector, if given, must be a permutation of all of T's dimensions; the default if not given is {0, ..., rank - 1}
. The order of the dimensions in dimensions
is from slowest-varying dimension (most major) to fastest-varying dimension (most minor) in the loop nest which collapses the input array into a single dimension. The new_sizes
vector determines the size of the output array. The value at index 0 in new_sizes
is the size of dimension 0, the value at index 1 is the size of dimension 1, and so on. The product of the new_size
dimensions must equal the product of the operand's dimension sizes. When refining the collapsed array into the multidimensional array defined by new_sizes
, the dimensions in new_sizes
are ordered from slowest varying (most major) to fastest varying (most minor).
For example, let v be an array of 24 elements:
let v = f32[4x2x3] { { {10, 11, 12}, {15, 16, 17} },
{ {20, 21, 22}, {25, 26, 27} },
{ {30, 31, 32}, {35, 36, 37} },
{ {40, 41, 42}, {45, 46, 47} } };
In-order collapse:
let v012_24 = Reshape(v, {0,1,2}, {24});
then v012_24 == f32[24] {10, 11, 12, 15, 16, 17, 20, 21, 22, 25, 26, 27,
30, 31, 32, 35, 36, 37, 40, 41, 42, 45, 46, 47};
let v012_83 = Reshape(v, {0,1,2}, {8,3});
then v012_83 == f32[8x3] { {10, 11, 12}, {15, 16, 17},
{20, 21, 22}, {25, 26, 27},
{30, 31, 32}, {35, 36, 37},
{40, 41, 42}, {45, 46, 47} };
Out-of-order collapse:
let v021_24 = Reshape(v, {1,2,0}, {24});
then v021_24 == f32[24] {10, 20, 30, 40, 11, 21, 31, 41, 12, 22, 32, 42,
15, 25, 35, 45, 16, 26, 36, 46, 17, 27, 37, 47};
let v021_83 = Reshape(v, {1,2,0}, {8,3});
then v021_83 == f32[8x3] { {10, 20, 30}, {40, 11, 21},
{31, 41, 12}, {22, 32, 42},
{15, 25, 35}, {45, 16, 26},
{36, 46, 17}, {27, 37, 47} };
let v021_262 = Reshape(v, {1,2,0}, {2,6,2});
then v021_262 == f32[2x6x2] { { {10, 20}, {30, 40},
{11, 21}, {31, 41},
{12, 22}, {32, 42} },
{ {15, 25}, {35, 45},
{16, 26}, {36, 46},
{17, 27}, {37, 47} } };
As a special case, reshape can transform a single-element array to a scalar and vice versa. For example,
Reshape(f32[1x1] { {5} }, {0,1}, {}) == 5;
Reshape(5, {}, {1,1}) == f32[1x1] { {5} };
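Under the assumption that `dimensions` behaves as described above, the out-of-order collapse can be modeled in NumPy as a transpose into collapse order followed by an ordinary row-major reshape (illustrative only, not the XLA API):

```
import numpy as np

v = np.array([[[10, 11, 12], [15, 16, 17]],
              [[20, 21, 22], [25, 26, 27]],
              [[30, 31, 32], [35, 36, 37]],
              [[40, 41, 42], [45, 46, 47]]], dtype=np.float32)

# Reshape(v, {1,2,0}, {24}): dimension 1 varies slowest and dimension 0 fastest.
v021_24 = v.transpose(1, 2, 0).reshape(24)
# -> [10, 20, 30, 40, 11, 21, 31, 41, 12, 22, 32, 42, 15, 25, ...]

# Reshape(v, {1,2,0}, {2,6,2}): same collapse order, refined into shape [2,6,2].
v021_262 = v.transpose(1, 2, 0).reshape(2, 6, 2)
```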
Rev (reverse)
See also XlaBuilder::Rev
.
Rev(operand, dimensions)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T |
dimensions | ArraySlice<int64> | dimensions to reverse |
Reverses the order of elements in the operand
array along the specified dimensions
, generating an output array of the same shape. Each element of the operand array at a multidimensional index is stored into the output array at a transformed index. The multidimensional index is transformed by reversing the index in each dimension to be reversed (ie, if a dimension of size N is one of the reversing dimensions, its index i is transformed into N - 1 - i).
One use for the Rev
operation is to reverse the convolution weight array along the two window dimensions during the gradient computation in neural networks.
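In NumPy terms (illustrative only), this corresponds to flipping the chosen axes:

```
import numpy as np

x = np.arange(6).reshape(2, 3)    # [[0, 1, 2], [3, 4, 5]]
np.flip(x, axis=(0, 1))           # Rev over dimensions {0, 1} -> [[5, 4, 3], [2, 1, 0]]
```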
RngNormal
See also XlaBuilder::RngNormal
.
Constructs an output of a given shape with random numbers generated following the \(N(\mu, \sigma)\) normal distribution. The parameters \(\mu\) and \(\sigma\), and the output shape, have to have a floating point elemental type. The parameters furthermore have to be scalar valued.
RngNormal(mu, sigma, shape)
Arguments | Type | Semantics |
---|---|---|
mu | XlaOp | Scalar of type T specifying mean of generated numbers |
sigma | XlaOp | Scalar of type T specifying standard deviation of generated numbers |
shape | Shape | Output shape of type T |
RngUniform
See also XlaBuilder::RngUniform
.
Constructs an output of a given shape with random numbers generated following the uniform distribution over the interval \([a,b)\). The parameters and output element type have to be a boolean type, an integral type or a floating point type, and the types have to be consistent. The CPU and GPU backends currently only support F64, F32, F16, BF16, S64, U64, S32 and U32. Furthermore, the parameters need to be scalar valued. If \(b <= a\), the result is implementation-defined.
RngUniform(a, b, shape)
Arguments | Type | Semantics |
---|---|---|
a | XlaOp | Scalar of type T specifying lower limit of interval |
b | XlaOp | Scalar of type T specifying upper limit of interval |
shape | Shape | Output shape of type T |
RngBitGenerator
Generates an output with a given shape filled with uniform random bits using the specified algorithm (or backend default) and returns an updated state (with the same shape as initial state) and the generated random data.
Initial state is the initial state of the current random number generation. It and the required shape and valid values are dependent on the algorithm used.
The output is guaranteed to be a deterministic function of the initial state but it is not guaranteed to be deterministic between backends and different compiler versions.
RngBitGenerator(algorithm, key, shape)
Arguments | Type | Semantics |
---|---|---|
algorithm | RandomAlgorithm | PRNG algorithm to be used. |
initial_state | XlaOp | Initial state for the PRNG algorithm. |
shape | Shape | Output shape for generated data. |
Available values for `algorithm`:

- `rng_default`: Backend specific algorithm with backend specific shape requirements.
- `rng_three_fry`: ThreeFry counter-based PRNG algorithm. The `initial_state` shape is `u64[2]` with arbitrary values. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3.
- `rng_philox`: Philox algorithm to generate random numbers in parallel. The `initial_state` shape is `u64[3]` with arbitrary values. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3.
Scatter
The XLA scatter operation generates a sequence of results which are the values of the input array operands
, with several slices (at indices specified by scatter_indices
) updated with the sequence of values in updates
using update_computation
.
See also XlaBuilder::Scatter
.
scatter(operands..., scatter_indices, updates..., update_computation, index_vector_dim, update_window_dims, inserted_window_dims, scatter_dims_to_operand_dims)
Arguments | Type | Semantics |
---|---|---|
operands | Sequence of N XlaOp | N arrays of types T_0, ..., T_N to be scattered into. |
scatter_indices | XlaOp | Array containing the starting indices of the slices that must be scattered to. |
updates | Sequence of N XlaOp | N arrays of types T_0, ..., T_N . updates[i] contains the values that must be used for scattering operands[i] . |
update_computation | XlaComputation | Computation to be used for combining the existing values in the input array and the updates during scatter. This computation should be of type T_0, ..., T_N, T_0, ..., T_N -> Collate(T_0, ..., T_N) . |
index_vector_dim | int64 | The dimension in scatter_indices that contains the starting indices. |
update_window_dims | ArraySlice<int64> | The set of dimensions in updates shape that are window dimensions . |
inserted_window_dims | ArraySlice<int64> | The set of window dimensions that must be inserted into updates shape. |
scatter_dims_to_operand_dims | ArraySlice<int64> | A dimensions map from the scatter indices to the operand index space. This array is interpreted as mapping i to scatter_dims_to_operand_dims[i] . It has to be one-to-one and total. |
indices_are_sorted | bool | Whether the indices are guaranteed to be sorted by the caller. |
Where:
- N is required to be greater or equal to 1.
- `operands[0]`, ..., `operands[N-1]` must all have the same dimensions.
- `updates[0]`, ..., `updates[N-1]` must all have the same dimensions.
- If `N = 1`, `Collate(T)` is `T`.
- If `N > 1`, `Collate(T_0, ..., T_N)` is a tuple of `N` elements of type `T`.
If index_vector_dim
is equal to scatter_indices.rank
we implicitly consider scatter_indices
to have a trailing 1
dimension.
We define update_scatter_dims
of type ArraySlice<int64>
as the set of dimensions in updates
shape that are not in update_window_dims
, in ascending order.
The arguments of scatter should follow these constraints:
- Each `updates` array must be of rank `update_window_dims.size + scatter_indices.rank - 1`.
- Bounds of dimension `i` in each `updates` array must conform to the following:
  - If `i` is present in `update_window_dims` (ie equal to `update_window_dims[k]` for some `k`), then the bound of dimension `i` in `updates` must not exceed the corresponding bound of `operand` after accounting for the `inserted_window_dims` (ie `adjusted_window_bounds[k]`, where `adjusted_window_bounds` contains the bounds of `operand` with the bounds at indices `inserted_window_dims` removed).
  - If `i` is present in `update_scatter_dims` (ie equal to `update_scatter_dims[k]` for some `k`), then the bound of dimension `i` in `updates` must be equal to the corresponding bound of `scatter_indices`, skipping `index_vector_dim` (ie `scatter_indices.shape.dims[k]` if `k` < `index_vector_dim`, and `scatter_indices.shape.dims[k+1]` otherwise).
- `update_window_dims` must be in ascending order, not have any repeating dimension numbers, and be in the range `[0, updates.rank)`.
- `inserted_window_dims` must be in ascending order, not have any repeating dimension numbers, and be in the range `[0, operand.rank)`.
- `operand.rank` must equal the sum of `update_window_dims.size` and `inserted_window_dims.size`.
- `scatter_dims_to_operand_dims.size` must be equal to `scatter_indices.shape.dims[index_vector_dim]`, and its values must be in the range `[0, operand.rank)`.
For a given index U
in each updates
array, the corresponding index I
in the corresponding operands
array into which this update has to be applied is computed as follows:
1. Let `G` = { `U[k]` for `k` in `update_scatter_dims` }. Use `G` to look up an index vector `S` in the `scatter_indices` array such that `S[i]` = `scatter_indices[Combine(G, i)]`, where `Combine(A, b)` inserts `b` at position `index_vector_dim` into `A`.
2. Create an index `S_in` into `operand` using `S` by scattering `S` using the `scatter_dims_to_operand_dims` map. More formally:
   1. `S_in[scatter_dims_to_operand_dims[k]]` = `S[k]` if `k` < `scatter_dims_to_operand_dims.size`.
   2. `S_in[_]` = `0` otherwise.
3. Create an index `W_in` into each `operands` array by scattering the indices at `update_window_dims` in `U` according to `inserted_window_dims`. More formally:
   1. `W_in[window_dims_to_operand_dims(k)]` = `U[k]` if `k` is in `update_window_dims`, where `window_dims_to_operand_dims` is the monotonic function with domain [`0`, `update_window_dims.size`) and range [`0`, `operand.rank`) \ `inserted_window_dims`. (For example, if `update_window_dims.size` is `4`, `operand.rank` is `6`, and `inserted_window_dims` is {`0`, `2`}, then `window_dims_to_operand_dims` is {`0`→`1`, `1`→`3`, `2`→`4`, `3`→`5`}.)
   2. `W_in[_]` = `0` otherwise.
4. `I` is `W_in + S_in`, where + is element-wise addition.
In summary, the scatter operation can be defined as follows.
- Initialize `output` with `operands`, ie for all indices `J`, for all indices `O` in the `operands[J]` array:
  `output[J][O]` = `operands[J][O]`
- For every index `U` in the `updates[J]` array and the corresponding index `O` in the `operand[J]` array, if `O` is a valid index for `output`:
  `(output[0][O], ..., output[N-1][O])` = `update_computation(output[0][O], ..., output[N-1][O], updates[0][U], ..., updates[N-1][U])`
The order in which updates are applied is non-deterministic. So, when multiple indices in updates
refer to the same index in operands
, the corresponding value in output
will be non-deterministic.
Note that the first parameter that is passed into the update_computation
will always be the current value from the output
array and the second parameter will always be the value from the updates
array. This is important specifically for cases when the update_computation
is not commutative .
If indices_are_sorted
is set to true then XLA can assume that start_indices
are sorted (in ascending start_index_map
order) by the user. If they are not then the semantics is implementation defined.
Informally, the scatter op can be viewed as an inverse of the gather op, ie the scatter op updates the elements in the input that are extracted by the corresponding gather op.
For a detailed informal description and examples, refer to the "Informal Description" section under Gather
.
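As a concrete illustration of these semantics (a plain-Python sketch, not the XLA API; the shapes, the indices and the use of addition as `update_computation` are made up), a 1-D scatter-add looks like this:

```
import numpy as np

operand = np.zeros(8, dtype=np.int64)
scatter_indices = np.array([1, 3, 1])        # index 1 occurs twice
updates = np.array([10, 20, 30])

output = operand.copy()
for u, idx in zip(updates, scatter_indices):
    # The first argument is the current output value, the second is the update value.
    output[idx] = output[idx] + u
# output -> [0, 40, 0, 20, 0, 0, 0, 0]; with a non-commutative update_computation,
# the value at index 1 would depend on the (unspecified) application order.
```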
Select
See also XlaBuilder::Select
.
Constructs an output array from elements of two input arrays, based on the values of a predicate array.
Select(pred, on_true, on_false)
Arguments | Type | Semantics |
---|---|---|
pred | XlaOp | array of type PRED |
on_true | XlaOp | array of type T |
on_false | XlaOp | array of type T |
The arrays on_true
and on_false
must have the same shape. This is also the shape of the output array. The array pred
must have the same dimensionality as on_true
and on_false
, with the PRED
element type.
For each element P
of pred
, the corresponding element of the output array is taken from on_true
if the value of P
is true
, and from on_false
if the value of P
is false
. As a restricted form of broadcasting, pred
can be a scalar of type PRED
. In this case, the output array is taken wholly from on_true
if pred
is true
, and from on_false
if pred
is false
.
Example with non-scalar pred
:
let pred: PRED[4] = {true, false, false, true};
let v1: s32[4] = {1, 2, 3, 4};
let v2: s32[4] = {100, 200, 300, 400};
==>
Select(pred, v1, v2) = s32[4]{1, 200, 300, 4};
Example with scalar pred
:
let pred: PRED = true;
let v1: s32[4] = {1, 2, 3, 4};
let v2: s32[4] = {100, 200, 300, 400};
==>
Select(pred, v1, v2) = s32[4]{1, 2, 3, 4};
Selections between tuples are supported. Tuples are considered to be scalar types for this purpose. If on_true
and on_false
are tuples (which must have the same shape!) then pred
has to be a scalar of type PRED
.
SelectAndScatter
See also XlaBuilder::SelectAndScatter
.
This operation can be considered as a composite operation that first computes ReduceWindow
on the operand
array to select an element from each window, and then scatters the source
array to the indices of the selected elements to construct an output array with the same shape as the operand array. The binary select
function is used to select an element from each window by applying it across each window, and it is called with the property that the first parameter's index vector is lexicographically less than the second parameter's index vector. The select
function returns true
if the first parameter is selected and returns false
if the second parameter is selected, and the function must hold transitivity (ie, if select(a, b)
and select(b, c)
are true
, then select(a, c)
is also true
) so that the selected element does not depend on the order of the elements traversed for a given window.
The function scatter
is applied at each selected index in the output array. It takes two scalar parameters:
- Current value at the selected index in the output array
- The scatter value from
source
that applies to the selected index
It combines the two parameters and returns a scalar value that's used to update the value at the selected index in the output array. Initially, all indices of the output array are set to init_value
.
The output array has the same shape as the operand
array and the source
array must have the same shape as the result of applying a ReduceWindow
operation on the operand
array. SelectAndScatter
can be used to backpropagate the gradient values for a pooling layer in a neural network.
SelectAndScatter(operand, select, window_dimensions, window_strides, padding, source, init_value, scatter)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | array of type T over which the windows slide |
select | XlaComputation | binary computation of type T, T -> PRED , to apply to all elements in each window; returns true if the first parameter is selected and returns false if the second parameter is selected |
window_dimensions | ArraySlice<int64> | array of integers for window dimension values |
window_strides | ArraySlice<int64> | array of integers for window stride values |
padding | Padding | padding type for window (Padding::kSame or Padding::kValid) |
source | XlaOp | array of type T with the values to scatter |
init_value | XlaOp | scalar value of type T for the initial value of the output array |
scatter | XlaComputation | binary computation of type T, T -> T , to apply each scatter source element with its destination element |
The figure below shows examples of using SelectAndScatter
, with the select
function computing the maximal value among its parameters. Note that when the windows overlap, as in the figure (2) below, an index of the operand
array may be selected multiple times by different windows. In the figure, the element of value 9 is selected by both of the top windows (blue and red) and the binary addition scatter
function produces the output element of value 8 (2 + 6).

The evaluation order of the scatter
function is arbitrary and may be non-deterministic. Therefore, the scatter
function should not be overly sensitive to reassociation. See the discussion about associativity in the context of Reduce
for more details.
Send
See also XlaBuilder::Send
.
Send(operand, channel_handle)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | data to send (array of type T) |
channel_handle | ChannelHandle | unique identifier for each send/recv pair |
Sends the given operand data to a Recv
instruction in another computation that shares the same channel handle. Does not return any data.
Similar to the Recv
operation, the client API of Send
operation represents synchronous communication, and is internally decomposed into 2 HLO instructions ( Send
and SendDone
) to enable asynchronous data transfers. See also HloInstruction::CreateSend
and HloInstruction::CreateSendDone
.
Send(HloInstruction operand, int64 channel_id)
Initiates an asynchronous transfer of the operand to the resources allocated by the Recv
instruction with the same channel id. Returns a context, which is used by a following SendDone
instruction to wait for the completion of the data transfer. The context is a tuple of {operand (shape), request identifier (U32)} and it can only be used by a SendDone
instruction.
SendDone(HloInstruction context)
Given a context created by a Send
instruction, waits for the data transfer to complete. The instruction does not return any data.
Scheduling of channel instructions
The execution order of the 4 instructions for each channel ( Recv
, RecvDone
, Send
, SendDone
) is as below.

- `Recv` happens before `Send`
- `Send` happens before `RecvDone`
- `Recv` happens before `RecvDone`
- `Send` happens before `SendDone`
When the backend compilers generate a linear schedule for each computation that communicates via channel instructions, there must not be cycles across the computations. For example, the schedules below lead to deadlocks.

Slice
See also XlaBuilder::Slice
.
Slicing extracts a sub-array from the input array. The sub-array is of the same rank as the input and contains the values inside a bounding box within the input array where the dimensions and indices of the bounding box are given as arguments to the slice operation.
Slice(operand, start_indices, limit_indices, strides)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | N dimensional array of type T |
start_indices | ArraySlice<int64> | List of N integers containing the starting indices of the slice for each dimension. Values must be greater than or equal to zero. |
limit_indices | ArraySlice<int64> | List of N integers containing the ending indices (exclusive) for the slice for each dimension. Each value must be greater than or equal to the respective start_indices value for the dimension and less than or equal to the size of the dimension. |
strides | ArraySlice<int64> | List of N integers that decides the input stride of the slice. The slice picks every strides[d] element in dimension d . |
1-dimensional example:
let a = {0.0, 1.0, 2.0, 3.0, 4.0}
Slice(a, {2}, {4}) produces:
{2.0, 3.0}
2-dimensional example:
let b =
{ {0.0, 1.0, 2.0},
{3.0, 4.0, 5.0},
{6.0, 7.0, 8.0},
{9.0, 10.0, 11.0} }
Slice(b, {2, 1}, {4, 3}) produces:
{ { 7.0, 8.0},
{10.0, 11.0} }
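These examples map directly onto Python slicing (a NumPy sketch, illustrative only), which also makes the effect of `strides` easy to see:

```
import numpy as np

b = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0],
              [6.0, 7.0, 8.0],
              [9.0, 10.0, 11.0]])

b[2:4, 1:3]      # Slice(b, {2, 1}, {4, 3}) -> [[7.0, 8.0], [10.0, 11.0]]
b[0:4:2, 0:3]    # strides {2, 1}: every second row -> [[0.0, 1.0, 2.0], [6.0, 7.0, 8.0]]
```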
Sort
See also XlaBuilder::Sort
.
Sort(operands, comparator, dimension, is_stable)
Arguments | Type | Semantics |
---|---|---|
operands | ArraySlice<XlaOp> | The operands to sort. |
comparator | XlaComputation | The comparator computation to use. |
dimension | int64 | The dimension along which to sort. |
is_stable | bool | Whether stable sorting should be used. |
If only one operand is provided:
- If the operand is a rank-1 tensor (an array), the result is a sorted array. If you want to sort the array into ascending order, the comparator should perform a less-than comparison. Formally, after the array is sorted, it holds for all index positions `i, j` with `i < j` that either `comparator(value[i], value[j]) = comparator(value[j], value[i]) = false` or `comparator(value[i], value[j]) = true`.
- If the operand has higher rank, the operand is sorted along the provided dimension. For example, for a rank-2 tensor (a matrix), a dimension value of `0` will independently sort every column, and a dimension value of `1` will independently sort each row. If no dimension number is provided, then the last dimension is chosen by default. For the dimension which is sorted, the same sorting order applies as in the rank-1 case.
If n > 1
operands are provided:
- All `n` operands must be tensors with the same dimensions. The element types of the tensors may be different.
- All operands are sorted together, not individually. Conceptually the operands are treated as a tuple. When checking whether the elements of each operand at index positions `i` and `j` need to be swapped, the comparator is called with `2 * n` scalar parameters, where parameter `2 * k` corresponds to the value at position `i` from the `k-th` operand, and parameter `2 * k + 1` corresponds to the value at position `j` from the `k-th` operand. Usually, the comparator would thus compare parameters `2 * k` and `2 * k + 1` with each other and possibly use other parameter pairs as tie breakers.
- The result is a tuple that consists of the operands in sorted order (along the provided dimension, as above). The `i-th` operand of the tuple corresponds to the `i-th` operand of Sort.
For example, if there are three operands operand0 = [3, 1]
, operand1 = [42, 50]
, operand2 = [-3.0, 1.1]
, and the comparator compares only the values of operand0
with less-than, then the output of the sort is the tuple ([1, 3], [50, 42], [1.1, -3.0])
.
If is_stable
is set to true, the sort is guaranteed to be stable, that is, if there are elements which are considered to be equal by the comparator, the relative order of the equal values is preserved. Two elements e1
and e2
are equal if and only if comparator(e1, e2) = comparator(e2, e1) = false
. By default, is_stable
is set to false.
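The three-operand example above can be sketched in NumPy (illustrative only; a stable argsort on `operand0` plays the role of a less-than comparator on the first operand):

```
import numpy as np

operand0 = np.array([3, 1])
operand1 = np.array([42, 50])
operand2 = np.array([-3.0, 1.1])

order = np.argsort(operand0, kind="stable")      # the comparator looks only at operand0
(operand0[order], operand1[order], operand2[order])
# -> ([1, 3], [50, 42], [1.1, -3.0])
```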
Top-K
See also the jax.lax.top_k
operation.
TopK(operand)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | N-dimensional array |
k | int64 | Integer specifying the number of top entries. |
comparator | XlaComputation | The comparator computation to use. |
Returns the top `k` values and their indices as a tuple, along the last dimension of the operand, using the given `comparator` (for usual top-k behavior, it should be a strict greater-than operation).
For example, given strict >
operator, k=1
and the following operand of shape f32[2,3]
:
[[0.1, 0.3, 0.1], [0.7, 0.2, -0.1]]
The TopK application returns the following tuple of shape (f32[2,1], s32[2,1])
:
([[0.3], [0.7]], [[1], [0]])
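The same result can be sketched in NumPy (illustrative only; `argsort` on the negated values stands in for a strict greater-than comparator):

```
import numpy as np

x = np.array([[0.1, 0.3, 0.1], [0.7, 0.2, -0.1]], dtype=np.float32)
k = 1

indices = np.argsort(-x, axis=-1)[..., :k]          # [[1], [0]]
values = np.take_along_axis(x, indices, axis=-1)    # [[0.3], [0.7]]
```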
Transpose
See also the tf.reshape
operation.
Transpose(operand)
Arguments | Type | Semantics |
---|---|---|
operand | XlaOp | The operand to transpose. |
permutation | ArraySlice<int64> | How to permute the dimensions. |
Permutes the operand dimensions with the given permutation, so ∀ i . 0 ≤ i < rank ⇒ input_dimensions[permutation[i]] = output_dimensions[i]
.
This is the same as Reshape(operand, permutation, Permute(permutation, operand.shape.dimensions)).
TriangularSolve
See also XlaBuilder::TriangularSolve
.
Solves systems of linear equations with lower or upper triangular coefficient matrices by forward- or back-substitution. Broadcasting along leading dimensions, this routine solves one of the matrix systems op(a) * x = b
, or x * op(a) = b
, for the variable x
, given a
and b
, where op(a)
is either op(a) = a
, or op(a) = Transpose(a)
, or op(a) = Conj(Transpose(a))
.
TriangularSolve(a, b, left_side, lower, unit_diagonal, transpose_a)
Arguments | Type | Semantics |
---|---|---|
a | XlaOp | a rank > 2 array of a complex or floating-point type with shape [..., M, M] . |
b | XlaOp | a rank > 2 array of the same type with shape [..., M, K] if left_side is true, [..., K, M] otherwise. |
left_side | bool | indicates whether to solve a system of the form op(a) * x = b ( true ) or x * op(a) = b ( false ). |
lower | bool | whether to use the upper or lower triangle of a . |
unit_diagonal | bool | if true , the diagonal elements of a are assumed to be 1 and not accessed. |
transpose_a | Transpose | whether to use a as is, transpose it or take its conjugate transpose. |
Input data is read only from the lower/upper triangle of a
, depending on the value of lower
. Values from the other triangle are ignored. Output data is returned in the same triangle; the values in the other triangle are implementation-defined and may be anything.
If the rank of a
and b
are greater than 2, they are treated as batches of matrices, where all except the minor 2 dimensions are batch dimensions. a
and b
must have equal batch dimensions.
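For a single lower-triangular system (no batch dimensions), the semantics match a standard triangular solve; here is a SciPy sketch under the assumption `left_side=true`, `lower=true`, `unit_diagonal=false` and no transpose:

```
import numpy as np
from scipy.linalg import solve_triangular

a = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([[2.0], [7.0], [32.0]])

# Solves a @ x = b reading only the lower triangle of a.
x = solve_triangular(a, b, lower=True)   # -> [[1.0], [2.0], [3.0]]
```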
Tuple
See also XlaBuilder::Tuple
.
A tuple containing a variable number of data handles, each of which has its own shape.
This is analogous to std::tuple
in C++. Conceptually:
let v: f32[10] = f32[10]{0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
let s: s32 = 5;
let t: (f32[10], s32) = tuple(v, s);
Tuples can be deconstructed (accessed) via the GetTupleElement
operation.
While
See also XlaBuilder::While
.
While(condition, body, init)
Arguments | Type | Semantics |
---|---|---|
condition | XlaComputation | XlaComputation of type T -> PRED which defines the termination condition of the loop. |
body | XlaComputation | XlaComputation of type T -> T which defines the body of the loop. |
init | T | Initial value for the parameter of condition and body . |
Sequentially executes the body
until the condition
fails. This is similar to a typical while loop in many other languages except for the differences and restrictions listed below.
- A `While` node returns a value of type `T`, which is the result from the last execution of the `body`.
- The shape of the type `T` is statically determined and must be the same across all iterations.
The T parameters of the computations are initialized with the init
value in the first iteration and are automatically updated to the new result from body
in each subsequent iteration.
One main use case of the While
node is to implement the repeated execution of training in neural networks. Simplified pseudocode is shown below with a graph that represents the computation. The code can be found in while_test.cc
. The type T
in this example is a Tuple
consisting of an int32
for the iteration count and a vector[10]
for the accumulator. For 1000 iterations, the loop keeps adding a constant vector to the accumulator.
// Pseudocode for the computation.
init = {0, zero_vector[10]} // Tuple of int32 and float[10].
result = init;
while (result(0) < 1000) {
iteration = result(0) + 1;
new_vector = result(1) + constant_vector[10];
result = {iteration, new_vector};
}
