embed
Author: Heli Qi
Affiliation: NAIST
Date: 2022.07
EmbedPrenet
Bases: Module
Create new embeddings for the vocabulary.
Use scaling for the Transformer.
Source code in speechain/module/prenet/embed.py
forward(text)
Perform a lookup for the input `text` in the embedding table.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | Tensor | (batch, seq_len) | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The embedded representation of the input text. |
Source code in speechain/module/prenet/embed.py
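The scaled lookup can be sketched as follows. This is a hypothetical re-implementation assuming `EmbedPrenet` wraps `torch.nn.Embedding` and applies the Transformer's `sqrt(d_model)` scaling when `scale=True`; it is not the actual speechain source.

```python
import math

import torch


class EmbedPrenetSketch(torch.nn.Module):
    """Sketch of a scaled embedding prenet (assumed design, not the speechain code)."""

    def __init__(self, embedding_dim: int, vocab_size: int,
                 scale: bool = False, padding_idx: int = 0):
        super().__init__()
        self.embedding = torch.nn.Embedding(
            vocab_size, embedding_dim, padding_idx=padding_idx)
        # Transformer-style scaling factor sqrt(embedding_dim), used when scale=True
        self.scale_factor = math.sqrt(embedding_dim) if scale else 1.0

    def forward(self, text: torch.Tensor) -> torch.Tensor:
        # text: (batch, seq_len) integer token ids
        # returns: (batch, seq_len, embedding_dim)
        return self.embedding(text) * self.scale_factor


emb = EmbedPrenetSketch(embedding_dim=8, vocab_size=100, scale=True)
out = emb(torch.zeros(2, 5, dtype=torch.long))
print(out.shape)  # torch.Size([2, 5, 8])
```

The scaling keeps the embedding magnitudes comparable to the positional encodings added afterwards in the Transformer.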
module_init(embedding_dim, vocab_size, scale=False, padding_idx=0)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `embedding_dim` | int | The dimension of the token embedding vectors. | required |
| `vocab_size` | int | The number of tokens in the dictionary. | required |
| `scale` | bool | Controls whether the values of the embedding vectors are scaled by the embedding dimension. Useful for the Transformer model. | False |
| `padding_idx` | int | The token index used for padding the tails of short sentences. | 0 |
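The effect of `padding_idx` can be illustrated with `torch.nn.Embedding`, which this module presumably builds on: the vector at `padding_idx` stays all-zero and receives no gradient, so padded tail positions contribute nothing downstream.

```python
import torch

# padding_idx=0: token id 0 maps to a frozen all-zero embedding vector
emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=0)
ids = torch.tensor([[5, 7, 0, 0]])  # sentence of length 2, padded to length 4
out = emb(ids)
print(out[0, 2])  # tensor([0., 0., 0., 0.]), the padding embedding
```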