This page provides information about the nodes in the cluster.
Name | Type |
---|---|
node | int2 |
slices:partial | int2 |
slices:full | int2 |
disks:count | int2 |
disks:size | int8 |
disks:total | int8 |
store:blocks:sorted | int8 |
store:blocks:unsorted | int8 |
store:blocks:total | int8 |
store:rows:sorted | int8 |
store:rows:unsorted | int8 |
store:rows:total | int8 |
i/o:bytes read:disk | int8 |
i/o:bytes read:network | int8 |
i/o:rows:inserted | int8 |
i/o:rows:deleted | int8 |
i/o:rows:returned | int8 |
i/o:bytes processed:in memory | int8 |
i/o:bytes processed:on disk | int8 |
The node ID. Node IDs count from 0 and are contiguous.
The number of partial slices. These are compute slices in Amazon terminology. See clusters, nodes and slices.
The number of full slices. These are data slices in Amazon terminology. See clusters, nodes and slices.
The total number of disks.
The size of the disks. All disks have the same size.
The total disk size (count multiplied by size).
With ra3 node types, this is simply the amount of local disk, all of which is used in conjunction with RMS to store blocks. It's almost a cache - as far as I can tell, and I have investigated, it behaves like an LRU cache, except that blocks are not copied from RMS (which would leave them existing in both RMS and the cache) but are moved from RMS; a block exists in only one place at a time. I suspect this is a consequence of legacy constraints in the code base.
With the other node types, there is a significant difference between the store indicated in the node specification and the actual amount of store: the actual store is over-provisioned, being something like 10% to 15% larger than the store indicated in the specification. The system tables give no indication of the cluster node type, and so it is not possible to know the specification size.
As such, the value in this column is the over-provisioned size.
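As a sketch of where these per-node disk figures might come from (this query is my own illustration, not necessarily how this page is actually computed), the disk count and total capacity can be estimated from stv_partitions, which has one row per disk partition per node:

```sql
-- Illustrative sketch only : per-node disk count and total capacity
-- from stv_partitions (owner is the node, capacity is in 1 MB blocks).
select owner                  as node,
       count(distinct diskno) as disk_count,
       sum(capacity)          as total_capacity_mb
  from stv_partitions
 group by owner
 order by owner;
```

Note that on the non-ra3 node types, the capacities reported here are the over-provisioned sizes described above, not the specification sizes.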
The number of sorted blocks.
The number of unsorted blocks.
The total number of blocks.
The number of sorted rows.
The number of unsorted rows.
The total number of rows.
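By way of illustration (again my own sketch, not necessarily how this page computes its values), per-node sorted and unsorted block counts could be derived from stv_blocklist, mapping slices to nodes via stv_slices:

```sql
-- Illustrative sketch only : per-node sorted/unsorted block counts,
-- joining stv_blocklist (one row per 1 MB block) to stv_slices
-- (which maps slice to node).  unsorted = 1 marks an unsorted block.
select s.node,
       sum(case when b.unsorted = 0 then 1 else 0 end) as blocks_sorted,
       sum(case when b.unsorted = 1 then 1 else 0 end) as blocks_unsorted,
       count(*)                                        as blocks_total
  from stv_blocklist as b
  join stv_slices    as s on s.slice = b.slice
 group by s.node
 order by s.node;
```

Row counts can be estimated similarly by summing num_values rather than counting blocks, but restricted to a single column (e.g. col = 0), so that each row is counted once rather than once per column.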
This column then shows the total number of bytes read from disk, as best I can judge from the types indicated in stl_scan.
This column then shows the total number of bytes read from the network, as best I can judge from the types indicated in stl_scan.
Importantly, it counts only the receive side of network activity - the step is scan, after all, not broadcast - so we're not counting bytes twice.
The number of rows inserted into the table.
For tables with all distribution, this is the physical number of rows (i.e. one copy per node), not the logical number of rows.
The number of rows deleted from the table.
For tables with all distribution, this is the physical number of rows (i.e. one copy per node), not the logical number of rows.
It is the leader node which returns rows to the SQL client, and so for this page, this column is the number of rows returned to the leader node.
This column then shows the total number of bytes processed by the stl_aggr, stl_hash, stl_save, stl_sort and stl_unique steps, when running in memory rather than on disk.
This column then shows the total number of bytes processed by the stl_aggr, stl_hash, stl_save, stl_sort and stl_unique steps, when running on disk rather than in memory.