
Dynamic Blocks#

Influence-aware block partitioning and per-temperature block configuration. Use these when different temperature levels benefit from different block granularity.

hamon.compute_aggregate_influence(edges: list[tuple[AbstractNode, AbstractNode]], weights: np.ndarray | jax.Array, beta: float, nodes: list[AbstractNode]) -> tuple[np.ndarray, np.ndarray] #

Compute per-node aggregate influence A(w) = Σ_{z~w} Γ_{w,z}.

Returns (aggregate_influence, edge_influence) both as numpy arrays. Heavy nodes (A(w) > threshold) should be buried inside blocks.
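A minimal sketch of the quantity this function computes, using integer node indices instead of `AbstractNode` objects. The edge-influence measure `Γ_{w,z} = tanh(β·|J_{w,z}|)` is an assumption (a common choice for Ising-type couplings); hamon's actual definition may differ.

```python
import numpy as np

def aggregate_influence(edges, weights, beta, n_nodes):
    """Per-node aggregate influence A(w) = sum over neighbours z~w of Gamma_{w,z}.

    Illustrative sketch: Gamma_{w,z} = tanh(beta * |J_{w,z}|) is an assumed
    edge-influence measure, not necessarily hamon's. Nodes are integer indices.
    """
    edge_influence = np.tanh(beta * np.abs(np.asarray(weights, dtype=float)))
    agg = np.zeros(n_nodes)
    for (u, v), gamma in zip(edges, edge_influence):
        # an undirected edge contributes influence to both endpoints
        agg[u] += gamma
        agg[v] += gamma
    return agg, edge_influence
```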

hamon.influence_aware_partition(nodes: list[AbstractNode], edges: list[tuple[AbstractNode, AbstractNode]], weights: np.ndarray, beta: float, max_block_size: int = 16, buffer_depth: int = 2) -> list[list[AbstractNode]] #

Build blocks in which heavy nodes are interior and light nodes form the boundary.

Blocks are grown by greedy BFS: each block is seeded at the heaviest unassigned node and expanded outward until it reaches max_block_size.
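The greedy-BFS strategy described above can be sketched as follows, again over integer node indices. This is an illustration of the stated algorithm, not hamon's implementation (in particular, the `buffer_depth` handling is omitted).

```python
import numpy as np
from collections import deque

def influence_aware_partition(n_nodes, edges, agg_influence, max_block_size=16):
    """Greedy BFS partition: seed each block at the heaviest unassigned node,
    grow outward until max_block_size. Sketch only; omits buffer_depth."""
    adj = [[] for _ in range(n_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order = np.argsort(-np.asarray(agg_influence, dtype=float))  # heaviest first
    assigned = [False] * n_nodes
    blocks = []
    for seed in order:
        if assigned[seed]:
            continue
        block, queue = [], deque([int(seed)])
        assigned[seed] = True
        while queue and len(block) < max_block_size:
            w = queue.popleft()
            block.append(w)
            for z in adj[w]:
                if not assigned[z]:
                    assigned[z] = True
                    queue.append(z)
        for leftover in queue:
            assigned[leftover] = False  # overflow returns to the unassigned pool
        blocks.append(block)
    return blocks
```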

hamon.per_temperature_block_config(betas: Sequence[float], beta_c: float = 0.4407, min_size: int = 1, max_size: int = 16) -> list[int] #

Generate block size recommendations for each temperature in a PT chain.

Returns a list of recommended block sizes, one per temperature (i.e. one per chain in the parallel-tempering ladder).
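One plausible heuristic, sketched below under the assumption that recommended block size grows as β approaches the critical point β_c (where correlations are longest) and shrinks away from it. The exact rule hamon uses is not documented here and may differ.

```python
def per_temperature_block_config(betas, beta_c=0.4407, min_size=1, max_size=16):
    """One recommended block size per temperature.

    Assumed heuristic: size peaks at beta_c and decays with the relative
    distance |beta - beta_c| / beta_c. Illustrative only.
    """
    sizes = []
    for beta in betas:
        # closeness in (0, 1]: 1 exactly at beta_c, smaller further away
        closeness = 1.0 / (1.0 + abs(beta - beta_c) / beta_c)
        size = round(min_size + closeness * (max_size - min_size))
        sizes.append(max(min_size, min(max_size, size)))
    return sizes
```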

hamon.dynamic_reblock(nodes: list[AbstractNode], edges: list[tuple[AbstractNode, AbstractNode]], current_blocks: list[list[AbstractNode]], samples: jax.Array, max_block_size: int = 16, correlation_threshold: float = 0.3) -> list[list[AbstractNode]] #

Re-partition blocks based on empirical correlations from recent samples.

This is the main entry point for dynamic re-blocking. Call periodically during NRPT (e.g. every 50-100 rounds) to adapt blocks to the actual correlation structure at each temperature.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `nodes` | `list[AbstractNode]` | All nodes in the model. | *required* |
| `edges` | `list[tuple[AbstractNode, AbstractNode]]` | All edges. | *required* |
| `current_blocks` | `list[list[AbstractNode]]` | Current block partition (list of node lists). | *required* |
| `samples` | `jax.Array` | `(n_samples, n_nodes)` recent samples from this chain. | *required* |
| `max_block_size` | `int` | Maximum nodes per block. | `16` |
| `correlation_threshold` | `float` | Minimum correlation to trigger a merge. | `0.3` |

Returns:

| Type | Description |
| --- | --- |
| `list[list[AbstractNode]]` | New block partition (list of node lists). |
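A sketch of correlation-driven re-blocking over integer node indices: estimate |Pearson correlation| between nodes from recent samples, then greedily merge block pairs whose strongest cross-block correlation exceeds the threshold, subject to `max_block_size`. The merge criterion (max cross-correlation) is an assumption; hamon may split as well as merge, and operates on `AbstractNode` objects.

```python
import numpy as np

def dynamic_reblock(current_blocks, samples, max_block_size=16,
                    correlation_threshold=0.3):
    """Merge blocks whose nodes are empirically correlated.

    Sketch only. `samples` is (n_samples, n_nodes); blocks are lists of
    integer node indices. Constant columns would yield NaN correlations.
    """
    samples = np.asarray(samples, dtype=float)
    corr = np.abs(np.corrcoef(samples.T))  # (n_nodes, n_nodes)
    blocks = [list(b) for b in current_blocks]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if len(blocks[i]) + len(blocks[j]) > max_block_size:
                    continue  # merging would exceed the size cap
                cross = corr[np.ix_(blocks[i], blocks[j])]
                if cross.max() >= correlation_threshold:
                    blocks[i] = blocks[i] + blocks[j]
                    del blocks[j]
                    merged = True
                    break
            if merged:
                break
    return blocks
```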

hamon.classify_nodes(aggregate_influence: np.ndarray, threshold: float | None = None) -> tuple[np.ndarray, np.ndarray] #

Split nodes into heavy (high influence) and light sets.

Default threshold: median aggregate influence. Returns (heavy_indices, light_indices).
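The documented behaviour (median default threshold, strict inequality for "heavy") can be sketched as:

```python
import numpy as np

def classify_nodes(aggregate_influence, threshold=None):
    """Split node indices into heavy (A(w) > threshold) and light sets.

    Default threshold is the median aggregate influence, as documented.
    """
    agg = np.asarray(aggregate_influence, dtype=float)
    if threshold is None:
        threshold = np.median(agg)
    heavy = np.where(agg > threshold)[0]
    light = np.where(agg <= threshold)[0]
    return heavy, light
```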