pretrained.causal_hubert
Defines an API for interacting with a causal HuBERT model.
This model is trained to predict HuBERT tokens from the previous N audio embedding vectors, rather than with a bidirectional transformer. This lends itself better to real-time applications, since the model can be run causally.
One difference from the original HuBERT model is that this model uses a convolutional encoder with kernel sizes matching the strides. While this can perform worse than the original convolutional encoder, it allows chunks of audio to be processed as they come in.
from pretrained.causal_hubert import pretrained_causal_hubert

model = pretrained_causal_hubert("base-conv-encoder")

state = None
for waveform_chunk in waveform_chunks:
    tokens, state = model(waveform_chunk, state)
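Because the state carries leftover waveform samples between calls, chunk boundaries do not need to align with the encoder's stride. A minimal sketch of streaming over a full waveform (the 1D waveform shape and the chunk size here are illustrative assumptions, not requirements from the source):

import torch

from pretrained.causal_hubert import pretrained_causal_hubert

model = pretrained_causal_hubert("base-conv-encoder")

waveform = torch.randn(16000)  # Hypothetical one second of 16 kHz audio.
chunk_size = 1600  # 100 ms chunks; an arbitrary choice.

state = None
all_tokens = []
for start in range(0, waveform.shape[0], chunk_size):
    tokens, state = model(waveform[start : start + chunk_size], state)
    all_tokens.append(tokens)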
- pretrained.causal_hubert.cast_pretrained_causal_hubert_key(s: str) Literal['base-conv-encoder', 'base-linear-encoder', 'base-linear-encoder-better'] [source]
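This helper narrows an arbitrary string (say, a CLI argument) to the Literal of known pretrained keys, presumably raising an error for anything unrecognized. A hedged usage sketch:

from pretrained.causal_hubert import cast_pretrained_causal_hubert_key, pretrained_causal_hubert

key = cast_pretrained_causal_hubert_key("base-conv-encoder")  # Validated and type-narrowed.
model = pretrained_causal_hubert(key)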
- class pretrained.causal_hubert.SelfAttentionState(key, value)[source]
Bases: NamedTuple
- key: Tensor
Cached attention keys from previously processed frames.
- value: Tensor
Cached attention values from previously processed frames.
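This tuple is the per-layer key/value cache for streaming attention: on each new chunk, fresh keys and values are presumably appended to the cached ones so earlier frames never need recomputing. Being a NamedTuple, it unpacks directly; given the state returned by the streaming loop above (a CausalHubertState, defined next):

key_cache, value_cache = state.attn_states[0]  # First layer's cached keys and values.
print(key_cache.shape, value_cache.shape)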
- class pretrained.causal_hubert.CausalHubertState(offset, waveform_leftover, attn_states)[source]
Bases: NamedTuple
- offset: int
Position offset of the current chunk within the full sequence.
- waveform_leftover: Tensor
Samples from the previous chunk that did not fill a complete extractor frame, prepended to the next chunk.
- attn_states: list[pretrained.causal_hubert.SelfAttentionState]
One cached key/value state per self-attention layer.
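This is the full streaming state threaded through the usage example at the top of the module. Passing None starts a fresh stream; afterwards the returned state can be inspected between chunks (field meanings as described above):

tokens, state = model(first_chunk, None)  # None starts a fresh stream.
print(state.offset)                       # Position within the full sequence so far.
print(state.waveform_leftover.shape)      # Samples deferred to the next chunk.
print(len(state.attn_states))             # One SelfAttentionState per layer.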
- class pretrained.causal_hubert.Attention(hidden_size: int, num_heads: int, local_attn: int, dropout: float = 0.0, layer_norm_eps: float = 1e-05)[source]
Bases: Module
- forward(x: Tensor, mask: Tensor, state: SelfAttentionState | None = None) tuple[torch.Tensor, pretrained.causal_hubert.SelfAttentionState] [source]
Runs local causal self-attention over x, optionally continuing from a previous SelfAttentionState, and returns the output tensor along with the updated state.
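A hedged sketch of driving the attention block in streaming mode; the hyperparameters, tensor shapes, and mask layout below are illustrative assumptions, not values from the source:

import torch
from pretrained.causal_hubert import Attention

attn = Attention(hidden_size=768, num_heads=12, local_attn=64)
x = torch.randn(1, 10, 768)                  # Assumed (batch, time, hidden) layout.
mask = torch.ones(10, 10, dtype=torch.bool)  # Assumed (time, time) attention mask.
y, state = attn(x, mask)                     # First chunk: no prior state.
y, state = attn(x, mask, state)              # Later chunks reuse the cached keys/values.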
- class pretrained.causal_hubert.FeedForward(hidden_size: int, dim_feedforward: int, dropout: float = 0.0, layer_norm_eps: float = 1e-05)[source]
Bases: Module
- forward(x: Tensor) Tensor [source]
Applies the position-wise feedforward block to x and returns the transformed tensor.
- class pretrained.causal_hubert.SelfAttentionLayer(hidden_size: int, num_heads: int, dim_feedforward: int, local_attn: int, dropout: float = 0.0, layer_norm_eps: float = 1e-05)[source]
Bases: Module
- forward(x: Tensor, mask: Tensor, state: SelfAttentionState | None = None) tuple[torch.Tensor, pretrained.causal_hubert.SelfAttentionState] [source]
Applies the attention and feedforward sublayers to x, optionally continuing from a previous SelfAttentionState, and returns the output tensor along with the updated state.
- class pretrained.causal_hubert.SelfAttention(hidden_size: int, num_heads: int, dim_feedforward: int, num_layers: int, local_attn: int, max_tsz: int = 2048)[source]
Bases: Module
- mask: Tensor
Precomputed attention mask buffer, sized by max_tsz and local_attn.
- forward(x: Tensor, states: list[pretrained.causal_hubert.SelfAttentionState] | None = None) tuple[torch.Tensor, list[pretrained.causal_hubert.SelfAttentionState]] [source]
Runs the stack of self-attention layers over x, threading one SelfAttentionState per layer, and returns the output tensor along with the updated list of states.
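Unlike the single Attention block, the stack owns its mask buffer (sized by max_tsz) and threads one SelfAttentionState per layer, so callers pass only embeddings and the state list. A hedged sketch with illustrative hyperparameters:

import torch
from pretrained.causal_hubert import SelfAttention

encoder = SelfAttention(
    hidden_size=768, num_heads=12, dim_feedforward=3072, num_layers=12, local_attn=64
)
x = torch.randn(1, 10, 768)     # Assumed (batch, time, hidden) layout.
y, states = encoder(x)          # states holds one SelfAttentionState per layer.
y, states = encoder(x, states)  # Subsequent chunks continue from the cached states.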
- class pretrained.causal_hubert.ConvExtractor(hidden_size: int, conv_dim: tuple[int, ...] = (512, 512, 512, 512, 512, 512, 512), conv_stride: tuple[int, ...] = (5, 2, 2, 2, 2, 2, 2), conv_bias: bool = True, feat_extract_norm: Literal['group', 'layer'] = 'layer', feat_extract_activation: Literal['no_act', 'relu', 'relu6', 'relu2', 'clamp6', 'leaky_relu', 'elu', 'celu', 'selu', 'gelu', 'gelu_fast', 'sigmoid', 'log_sigmoid', 'hard_sigomid', 'tanh', 'softsign', 'softplus', 'silu', 'mish', 'swish', 'hard_swish', 'soft_shrink', 'hard_shrink', 'tanh_shrink', 'soft_sign', 'relu_squared', 'laplace'] = 'gelu', layer_norm_eps: float = 1e-05, feat_proj_dropout: float = 0.0, feat_proj_layer_norm: bool = True)[source]
Bases: Module
- forward(waveform: Tensor) Tensor [source]
Extracts an embedding sequence from the raw waveform using the stack of strided convolutions.
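With the default strides and kernel sizes matching the strides (see the module notes above), each output frame consumes exactly 5 * 2**6 = 320 input samples with no overlap, i.e. 20 ms at 16 kHz. This is what makes chunked processing exact: a multiple of 320 samples yields the same frames whether processed at once or in pieces. A quick check of the arithmetic:

import math

conv_stride = (5, 2, 2, 2, 2, 2, 2)
samples_per_frame = math.prod(conv_stride)
assert samples_per_frame == 320  # 320 / 16000 Hz = 20 ms per frame.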
- class pretrained.causal_hubert.LinearExtractor(hidden_size: int, receptive_field_size: int)[source]
Bases: Module
- forward(waveform: Tensor) Tensor [source]
Extracts an embedding sequence from the raw waveform by linearly projecting each receptive field.
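The constructor arguments suggest this encoder splits the waveform into non-overlapping windows of receptive_field_size samples and projects each to hidden_size with a single linear layer; the sketch below leans on that assumption, including the tensor shapes:

import torch
from pretrained.causal_hubert import LinearExtractor

extractor = LinearExtractor(hidden_size=768, receptive_field_size=320)
waveform = torch.randn(1, 3200)  # Assumed (batch, samples) layout.
feats = extractor(waveform)      # Presumably (batch, 10, 768): one frame per 320 samples.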
- class pretrained.causal_hubert.CausalHubert(hidden_size: int, num_heads: int, dim_feedforward: int, num_layers: int, num_hubert_tokens: int, local_attn: int, extractor: ConvExtractor | LinearExtractor, max_tsz: int = 2048)[source]
Bases: Module
- forward(waveform: Tensor, state: CausalHubertState | None = None) tuple[torch.Tensor, pretrained.causal_hubert.CausalHubertState] [source]
Encodes the waveform chunk, carrying leftover samples and per-layer attention caches through the CausalHubertState, and returns the HuBERT token predictions along with the updated state.
- predictor() CausalHubertPredictor [source]
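The predictor() helper presumably wraps the model in a CausalHubertPredictor that emits discrete token IDs rather than raw predictions, keeping the same streaming interface:

predictor = model.predictor()  # model is a trained CausalHubert.
state = None
for waveform_chunk in waveform_chunks:
    tokens, state = predictor(waveform_chunk, state)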
- class pretrained.causal_hubert.CausalHubertPredictor(hubert: CausalHubert)[source]
Bases: Module
- forward(waveform: Tensor, state: CausalHubertState | None = None) tuple[torch.Tensor, pretrained.causal_hubert.CausalHubertState] [source]
Runs the wrapped CausalHubert on the waveform chunk and returns the predicted HuBERT tokens along with the updated CausalHubertState.
- pretrained.causal_hubert.pretrained_causal_hubert(size: Literal['base-conv-encoder', 'base-linear-encoder', 'base-linear-encoder-better'], load_weights: bool = True) CausalHubert [source]
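Setting load_weights=False presumably builds the architecture without fetching the checkpoint, which is handy for loading custom weights or inspecting the model structure. A hedged sketch:

from pretrained.causal_hubert import pretrained_causal_hubert

# Instantiate the "base-linear-encoder" architecture without pretrained weights.
model = pretrained_causal_hubert("base-linear-encoder", load_weights=False)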