textflint.generation_layer.validator.gpt2_perplexity

GPT-2 language model perplexity class

class textflint.generation_layer.validator.gpt2_perplexity.GPT2LMHeadModel(config)[source]

Bases: transformers.models.gpt2.modeling_gpt2.GPT2PreTrainedModel

The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
config (GPT2Config): Model configuration class with all the parameters of the model.

Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
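
For illustration, here is a minimal sketch of the two initialization paths (using the standard transformers API):

>>> from transformers import GPT2Config, GPT2LMHeadModel
>>> config = GPT2Config()
>>> model = GPT2LMHeadModel(config)                    # randomly initialized weights, configuration only
>>> model = GPT2LMHeadModel.from_pretrained('gpt2')    # downloads and loads the pretrained weights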

parallelize(device_map=None)[source]

This is an experimental feature and is subject to change at a moment’s notice.

Uses a device map to distribute attention modules of the model across several devices. If no device map is given, it will evenly distribute blocks across all devices.

Args:
device_map (Dict[int, list], optional, defaults to None):

A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always automatically mapped to the first device (for esoteric reasons). That means that the first device should have fewer attention modules mapped to it than other devices. For reference, the gpt2 models have the following number of attention modules:

  • gpt2: 12

  • gpt2-medium: 24

  • gpt2-large: 36

  • gpt2-xl: 48

Example:

# Here is an example of a device map on a machine with 4 GPUs using gpt2-xl,
# which has a total of 48 attention modules:
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8],
              1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
              2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34],
              3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}
model.parallelize(device_map)

deparallelize()[source]

Moves the model to cpu from a model parallel state.

Example:

# On a 4 GPU machine with gpt2-large:
model = GPT2LMHeadModel.from_pretrained('gpt2-large')
device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7],
              1: [8, 9, 10, 11, 12, 13, 14, 15],
              2: [16, 17, 18, 19, 20, 21, 22, 23],
              3: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]}
model.parallelize(device_map)  # Splits the model across several devices
model.deparallelize()          # Puts the model back on cpu and cleans memory by calling torch.cuda.empty_cache()

forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

The GPT2LMHeadModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Args:
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)):

input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.

If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids.

Indices can be obtained using GPT2Tokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers):

Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed.

attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,

  • 0 for tokens that are masked.

token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

  • 0 corresponds to a sentence A token,

  • 1 corresponds to a sentence B token.

position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):

Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

  • 1 indicates the head is not masked,

  • 0 indicates the head is masked.

inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional):

Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values).

use_cache (bool, optional):

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

output_attentions (bool, optional):

Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

output_hidden_states (bool, optional):

Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

return_dict (bool, optional):

Whether or not to return a ModelOutput instead of a plain tuple.

labels (torch.LongTensor of shape (batch_size, sequence_length), optional):

Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

Returns:

CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor): A CausalLMOutputWithCrossAttentions (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (GPT2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Cross-attention weights after the attention softmax, used to compute the weighted average in the cross-attention heads.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key and value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting. Only relevant if config.is_decoder = True.

    Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

Example:

>>> import torch
>>> from transformers import GPT2Tokenizer, GPT2LMHeadModel

>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>>> model = GPT2LMHeadModel.from_pretrained('gpt2')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
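
The example above scores a full sequence in one pass. As a complementary sketch, the use_cache and past_key_values arguments described above can be used for incremental decoding; the snippet below is illustrative only and assumes the returned ModelOutput exposes past_key_values (i.e. return_dict is enabled):

>>> from transformers import GPT2Tokenizer, GPT2LMHeadModel
>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>>> model = GPT2LMHeadModel.from_pretrained('gpt2')

>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> outputs = model(**inputs, use_cache=True)
>>> past = outputs.past_key_values                         # cached key/value states for every layer
>>> next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
>>> # only the new token is fed in; its context is taken from the cache
>>> outputs = model(input_ids=next_token, past_key_values=past)
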
training: bool
class textflint.generation_layer.validator.gpt2_perplexity.GPT2Tokenizer(vocab_file, merges_file, errors='replace', unk_token='<|endoftext|>', bos_token='<|endoftext|>', eos_token='<|endoftext|>', add_prefix_space=False, **kwargs)[source]

Bases: transformers.tokenization_utils.PreTrainedTokenizer

Construct a GPT-2 tokenizer. Based on byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:

>>> from transformers import GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> tokenizer("Hello world")['input_ids']
[15496, 995]
>>> tokenizer(" Hello world")['input_ids']
[18435, 995]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

Note

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Args:
vocab_file (str):

Path to the vocabulary file.

merges_file (str):

Path to the merges file.

errors (str, optional, defaults to "replace"):

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

unk_token (str, optional, defaults to <|endoftext|>):

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

bos_token (str, optional, defaults to <|endoftext|>):

The beginning of sequence token.

eos_token (str, optional, defaults to <|endoftext|>):

The end of sequence token.

add_prefix_space (bool, optional, defaults to False):

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the GPT-2 tokenizer detects the beginning of a word by the preceding space). See the example below.
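
A rough illustration of this option (the exact ids depend on the gpt2 vocabulary; with a prefix space the encoding is expected to match the " Hello world" example above):

>>> from transformers import GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2", add_prefix_space=True)
>>> tokenizer("Hello world")['input_ids']   # encoded as if preceded by a space
[18435, 995]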

vocab_files_names: Dict[str, str] = {'merges_file': 'merges.txt', 'vocab_file': 'vocab.json'}
pretrained_vocab_files_map: Dict[str, Dict[str, str]] = {'merges_file': {'distilgpt2': 'https://huggingface.co/distilgpt2/resolve/main/merges.txt', 'gpt2': 'https://huggingface.co/gpt2/resolve/main/merges.txt', 'gpt2-large': 'https://huggingface.co/gpt2-large/resolve/main/merges.txt', 'gpt2-medium': 'https://huggingface.co/gpt2-medium/resolve/main/merges.txt', 'gpt2-xl': 'https://huggingface.co/gpt2-xl/resolve/main/merges.txt'}, 'vocab_file': {'distilgpt2': 'https://huggingface.co/distilgpt2/resolve/main/vocab.json', 'gpt2': 'https://huggingface.co/gpt2/resolve/main/vocab.json', 'gpt2-large': 'https://huggingface.co/gpt2-large/resolve/main/vocab.json', 'gpt2-medium': 'https://huggingface.co/gpt2-medium/resolve/main/vocab.json', 'gpt2-xl': 'https://huggingface.co/gpt2-xl/resolve/main/vocab.json'}}
max_model_input_sizes: Dict[str, Optional[int]] = {'distilgpt2': 1024, 'gpt2': 1024, 'gpt2-large': 1024, 'gpt2-medium': 1024, 'gpt2-xl': 1024}
model_input_names: List[str] = ['input_ids', 'attention_mask']
convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (strings) into a single string.
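
A quick illustration (the token strings shown assume the gpt2 vocabulary, where Ġ marks a leading space):

>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> tokens = tokenizer.tokenize("Hello world")
>>> tokens
['Hello', 'Ġworld']
>>> tokenizer.convert_tokens_to_string(tokens)
'Hello world'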

class textflint.generation_layer.validator.gpt2_perplexity.Validator(origin_dataset, trans_dataset, fields, need_tokens=False)[source]

Bases: abc.ABC

An abstract class that computes the semantic similarity score between original texts and adversarial texts.

Parameters
  • origin_dataset (dataset) – the dataset of original samples

  • trans_dataset (dataset) – the dataset of transformed samples

  • fields (str|list) – the name(s) of the original field(s) to compare

  • need_tokens (bool) – whether the sentences need to be tokenized

abstract validate(transformed_text, reference_text)[source]

Calculate the score

Parameters
  • transformed_text (str) – transformed sentence

  • reference_text (str) – original sentence

Return float

the score of the two sentences

check_data()[source]

Check whether the input data is valid

property score

Calculate the scores of the transformed sentences

Return list

a list of scores for the transformed sentences
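
For orientation, here is a minimal sketch of a concrete subclass that scores each transformed sentence by its GPT-2 perplexity (the exponential of the language modeling loss). The class name PerplexityValidator is hypothetical and not part of textflint's public API, and the base-class constructor behaviour is assumed from the parameters documented above; this is a sketch, not the library's actual implementation.

import torch

from textflint.generation_layer.validator.gpt2_perplexity import (
    GPT2LMHeadModel, GPT2Tokenizer, Validator)


class PerplexityValidator(Validator):
    # Hypothetical subclass, for illustration only.

    def __init__(self, origin_dataset, trans_dataset, fields, need_tokens=False):
        # Load the language model before the base-class constructor runs,
        # in case the latter already triggers scoring of the datasets.
        self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
        self.model = GPT2LMHeadModel.from_pretrained('gpt2')
        self.model.eval()
        super().__init__(origin_dataset, trans_dataset, fields, need_tokens)

    def validate(self, transformed_text, reference_text):
        # Perplexity of the transformed sentence only: exp of the mean
        # token-level cross-entropy returned as `loss` by GPT2LMHeadModel
        # when labels == input_ids. The reference sentence could be scored
        # the same way and compared, e.g. as a ratio of perplexities.
        inputs = self.tokenizer(transformed_text, return_tensors='pt')
        with torch.no_grad():
            outputs = self.model(**inputs, labels=inputs['input_ids'])
        return torch.exp(outputs.loss).item()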