CharRNNCell¶
- class mlpractice.rnn_torch.CharRNNCell(num_tokens, embedding_size=16, rnn_num_units=64)¶
Vanilla RNN cell with tanh non-linearity.
- Parameters
- num_tokens : int
Size of the token dictionary.
- embedding_size : int
Size of the token embedding vector.
- rnn_num_units : int
Number of features in the hidden state vector.
- Attributes
- num_units : int
Number of features in the hidden state vector.
- embedding : nn.Embedding
An embedding layer that converts a character id to a vector.
- rnn_update : nn.Linear
A linear layer that computes the new hidden state vector.
- rnn_to_logits : nn.Linear
An output layer that predicts the distribution over the next token.
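The attributes above suggest the standard vanilla-RNN update with a tanh non-linearity. A minimal sketch of how such a cell fits together (the exact wiring in mlpractice may differ; the concatenation order, zero-initialized h_0, and log-softmax output here are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharRNNCellSketch(nn.Module):
    """Hypothetical re-implementation of CharRNNCell, for illustration only."""

    def __init__(self, num_tokens, embedding_size=16, rnn_num_units=64):
        super().__init__()
        self.num_units = rnn_num_units
        # Converts a character id to a dense vector.
        self.embedding = nn.Embedding(num_tokens, embedding_size)
        # Mixes the current embedding with the previous hidden state.
        self.rnn_update = nn.Linear(embedding_size + rnn_num_units, rnn_num_units)
        # Maps the hidden state to next-token logits.
        self.rnn_to_logits = nn.Linear(rnn_num_units, num_tokens)

    def forward(self, x, h_prev):
        x_emb = self.embedding(x)                              # (batch, embedding_size)
        h_next = torch.tanh(
            self.rnn_update(torch.cat([x_emb, h_prev], dim=-1))
        )                                                      # (batch, num_units)
        logits = self.rnn_to_logits(h_next)                    # (batch, num_tokens)
        return h_next, F.log_softmax(logits, dim=-1)

    def initial_state(self, batch_size):
        # h_0: zeros, one row per sequence in the batch.
        return torch.zeros(batch_size, self.num_units)
```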
Methods
- add_module(name, module): Adds a child module to the current module.
- apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Returns an iterator over module buffers.
- children(): Returns an iterator over immediate children modules.
- cpu(): Moves all model parameters and buffers to the CPU.
- cuda([device]): Moves all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Sets the module in evaluation mode.
- extra_repr(): Set the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x, h_prev): Compute h_next(x, h_prev) and log(P(x_next | h_next)).
- get_buffer(target): Returns the buffer given by target if it exists, otherwise throws an error.
- get_extra_state(): Returns any extra state to include in the module's state_dict.
- get_parameter(target): Returns the parameter given by target if it exists, otherwise throws an error.
- get_submodule(target): Returns the submodule given by target if it exists, otherwise throws an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- initial_state(batch_size): Returns the RNN state before it processes the first input (aka h_0).
- load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
- modules(): Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Returns an iterator over module parameters.
- register_backward_hook(hook): Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Adds a buffer to the module.
- register_forward_hook(hook): Registers a forward hook on the module.
- register_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_full_backward_hook(hook): Registers a backward hook on the module.
- register_parameter(name, param): Adds a parameter to the module.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- set_extra_state(state): This function is called from load_state_dict() to handle any extra state found within the state_dict.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict([destination, prefix, keep_vars]): Returns a dictionary containing a whole state of the module.
- to(*args, **kwargs): Moves and/or casts the parameters and buffers.
- to_empty(*, device): Moves the parameters and buffers to the specified device without copying storage.
- train([mode]): Sets the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Sets gradients of all model parameters to zero.
- __call__
- forward(x, h_prev)¶
Compute h_next(x, h_prev) and log(P(x_next | h_next)). We call it repeatedly to produce the whole sequence.
- Parameters
- x : torch.LongTensor, shape (batch_size,)
Batch of character ids.
- h_prev : torch.FloatTensor, shape (batch_size, num_units)
Previous RNN hidden states.
- Returns
- h_next : torch.FloatTensor, shape (batch_size, num_units)
Next RNN hidden states.
- x_next_proba : torch.FloatTensor, shape (batch_size, num_tokens)
Predicted log-probabilities of the next token.
- initial_state(batch_size)¶
Returns the RNN state before it processes the first input (aka h_0).
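Together, initial_state and forward are typically driven in a loop over the sequence. A sketch of that pattern, using a stand-in cell that matches the documented interface (the real mlpractice class may differ in internal details):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class _StubCell(nn.Module):
    """Stand-in implementing the documented CharRNNCell interface."""

    def __init__(self, num_tokens, embedding_size=16, rnn_num_units=64):
        super().__init__()
        self.num_units = rnn_num_units
        self.embedding = nn.Embedding(num_tokens, embedding_size)
        self.rnn_update = nn.Linear(embedding_size + rnn_num_units, rnn_num_units)
        self.rnn_to_logits = nn.Linear(rnn_num_units, num_tokens)

    def forward(self, x, h_prev):
        h_next = torch.tanh(self.rnn_update(
            torch.cat([self.embedding(x), h_prev], dim=-1)))
        return h_next, F.log_softmax(self.rnn_to_logits(h_next), dim=-1)

    def initial_state(self, batch_size):
        return torch.zeros(batch_size, self.num_units)

num_tokens, batch_size, seq_len = 27, 4, 10
cell = _StubCell(num_tokens)
tokens = torch.randint(num_tokens, (batch_size, seq_len))  # fake character ids

# Unroll the cell over the sequence, collecting per-step log-probabilities.
h = cell.initial_state(batch_size)       # h_0
logprobs = []
for t in range(seq_len):
    h, logp = cell(tokens[:, t], h)      # h_{t+1}, log P(x_{t+1} | h_{t+1})
    logprobs.append(logp)
logprobs = torch.stack(logprobs, dim=1)  # (batch_size, seq_len, num_tokens)
```

The per-step log-probabilities can then feed a negative-log-likelihood training loss, or be exponentiated and sampled from to generate text one character at a time.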