
torchtext.data.functional

generate_sp_model

torchtext.data.functional.generate_sp_model(filename, vocab_size=20000, model_type='unigram', model_prefix='m_user')[source]

Train a SentencePiece tokenizer.

Parameters:
  • filename – the data file for training the SentencePiece model.

  • vocab_size – the size of the vocabulary (default: 20,000).

  • model_type – the type of SentencePiece model, including unigram, bpe, char, word.

  • model_prefix – the prefix of the files saving the model and vocab.

Outputs:
The model and vocab are saved in two separate files with model_prefix.

Examples

>>> from torchtext.data.functional import generate_sp_model
>>> generate_sp_model('test.csv', vocab_size=23456, model_prefix='spm_user')
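
The two files written by the call above are named after the chosen prefix; a quick check (a minimal sketch, assuming the training call above completed in the current working directory):

>>> import os
>>> os.path.exists('spm_user.model'), os.path.exists('spm_user.vocab')
(True, True)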

load_sp_model

torchtext.data.functional.load_sp_model(spm)[source]

Load a SentencePiece model from a file.

Parameters:

spm – the file path or a file object saving the SentencePiece model.

Outputs:

output: a SentencePiece model.

Examples

>>> from torchtext.data.functional import load_sp_model
>>> sp_model = load_sp_model("m_user.model")
>>> sp_model = load_sp_model(open("m_user.model", 'rb'))

sentencepiece_numericalizer

torchtext.data.functional.sentencepiece_numericalizer(sp_model)[source]
A SentencePiece model to numericalize a text sentence into a generator over the ids.

Parameters:

sp_model – a SentencePiece model.

Outputs:
output: a generator with the input of text sentence and the output of the corresponding ids based on the SentencePiece model.

Examples

>>> from torchtext.data.functional import sentencepiece_numericalizer
>>> sp_id_generator = sentencepiece_numericalizer(sp_model)
>>> list_a = ["sentencepiece encode as pieces", "examples to   try!"]
>>> list(sp_id_generator(list_a))
    [[9858, 9249, 1629, 1305, 1809, 53, 842],
     [2347, 13, 9, 150, 37]]
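
For a self-contained run, the numericalizer can be combined with load_sp_model from above; "m_user.model" is assumed to be a model trained earlier with generate_sp_model:

>>> from torchtext.data.functional import load_sp_model, sentencepiece_numericalizer
>>> sp_model = load_sp_model("m_user.model")  # assumed to exist, see load_sp_model above
>>> sp_id_generator = sentencepiece_numericalizer(sp_model)
>>> for ids in sp_id_generator(["sentencepiece encode as pieces"]):
>>>     print(ids)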

sentencepiece_tokenizer

torchtext.data.functional.sentencepiece_tokenizer(sp_model)[source]
A SentencePiece model to tokenize a text sentence into a generator over the tokens.

Parameters:

sp_model – a SentencePiece model.

Outputs:
output: a generator with the input of text sentence and the output of the corresponding tokens based on the SentencePiece model.

Examples

>>> from torchtext.data.functional import sentencepiece_tokenizer
>>> sp_tokens_generator = sentencepiece_tokenizer(sp_model)
>>> list_a = ["sentencepiece encode as pieces", "examples to   try!"]
>>> list(sp_tokens_generator(list_a))
    [['_sentence', 'piece', '_en', 'co', 'de', '_as', '_pieces'],
     ['_example', 's', '_to', '_try', '!']]
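
Because both generators are driven by the same model, the pieces and ids can be lined up for inspection (a minimal sketch, assuming sp_model was loaded as shown above):

>>> from torchtext.data.functional import sentencepiece_numericalizer, sentencepiece_tokenizer
>>> sp_tokens_generator = sentencepiece_tokenizer(sp_model)
>>> sp_id_generator = sentencepiece_numericalizer(sp_model)
>>> list_a = ["sentencepiece encode as pieces"]
>>> for tokens, ids in zip(sp_tokens_generator(list_a), sp_id_generator(list_a)):
>>>     print(list(zip(tokens, ids)))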

custom_replace

torchtext.data.functional.custom_replace(replace_pattern)[source]

A transform to convert text strings.

Examples

>>> from torchtext.data.functional import custom_replace
>>> custom_replace_transform = custom_replace([(r'S', 's'), (r'\s+', ' ')])
>>> list_a = ["Sentencepiece encode  aS  pieces", "exampleS to   try!"]
>>> list(custom_replace_transform(list_a))
    ['sentencepiece encode as pieces', 'examples to try!']
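
Behaviourally, custom_replace compiles each (pattern, replacement) pair and applies them in order to every input string, yielding the results lazily; a minimal equivalent sketch (not the library implementation):

>>> import re
>>> def custom_replace_sketch(replace_pattern):
>>>     compiled = [(re.compile(pattern), repl) for pattern, repl in replace_pattern]
>>>     def transform(txt_iter):
>>>         for line in txt_iter:
>>>             for pattern, repl in compiled:
>>>                 line = pattern.sub(repl, line)
>>>             yield line
>>>     return transform
>>> list(custom_replace_sketch([(r'S', 's'), (r'\s+', ' ')])(["Sentencepiece encode  aS  pieces"]))
    ['sentencepiece encode as pieces']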

simple_space_split

torchtext.data.functional.simple_space_split(iterator)[source]

A transform to split text strings by spaces.

Examples

>>> from torchtext.data.functional import simple_space_split
>>> list_a = ["Sentencepiece encode as pieces", "example to try!"]
>>> list(simple_space_split(list_a))
    [['Sentencepiece', 'encode', 'as', 'pieces'], ['example', 'to', 'try!']]
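
The split itself is just Python's whitespace split, applied lazily to each string; a minimal equivalent sketch (not the library implementation):

>>> def simple_space_split_sketch(iterator):
>>>     for line in iterator:
>>>         yield line.split()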

numericalize_tokens_from_iterator

torchtext.data.functional.numericalize_tokens_from_iterator(vocab, iterator, removed_tokens=None)[source]

Yield a list of ids from a token iterator with a vocab.

Parameters:
  • vocab – the vocab object that converts tokens into ids.

  • iterator – the iterator that yields a list of tokens.

  • removed_tokens – the tokens removed from the output dataset (default: None).

Examples

>>> from torchtext.data.functional import simple_space_split
>>> from torchtext.data.functional import numericalize_tokens_from_iterator
>>> vocab = {'Sentencepiece' : 0, 'encode' : 1, 'as' : 2, 'pieces' : 3}
>>> ids_iter = numericalize_tokens_from_iterator(vocab,
>>>                               simple_space_split(["Sentencepiece as pieces",
>>>                                                   "as pieces"]))
>>> for ids in ids_iter:
>>>     print([num for num in ids])
    [0, 2, 3]
    [2, 3]
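
Conceptually, each token list from the iterator is first filtered against removed_tokens and then looked up in vocab; a minimal equivalent sketch (not the library implementation, and it assumes every remaining token is present in vocab):

>>> def numericalize_sketch(vocab, iterator, removed_tokens=None):
>>>     for tokens in iterator:
>>>         if removed_tokens is not None:
>>>             tokens = [tok for tok in tokens if tok not in removed_tokens]
>>>         yield iter(vocab[tok] for tok in tokens)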

filter_wikipedia_xml

torchtext.data.functional.filter_wikipedia_xml(text_iterator)[source]

Filter Wikipedia XML lines according to https://github.com/facebookresearch/fastText/blob/master/wikifil.pl

Parameters:

text_iterator – an iterator type object that yields strings. Examples include a string list, text io, generators, etc.

Examples

>>> from torchtext.data.functional import filter_wikipedia_xml
>>> from torchtext.datasets import EnWik9
>>> data_iter = EnWik9(split='train')
>>> filter_data_iter = filter_wikipedia_xml(data_iter)
>>> file_name = '.data/EnWik9/enwik9'
>>> filter_data_iter = filter_wikipedia_xml(open(file_name,'r'))

to_map_style_dataset

torchtext.data.functional.to_map_style_dataset(iter_data)[source]

Convert an iterable-style dataset to a map-style dataset.

Parameters:

iter_data – an iterator type object. Examples include iterable datasets, a string list, text io, generators, etc.

Examples

>>> from torchtext.datasets import IMDB
>>> from torchtext.data import to_map_style_dataset
>>> train_iter = IMDB(split='train')
>>> train_dataset = to_map_style_dataset(train_iter)
>>> file_name = '.data/EnWik9/enwik9'
>>> data_iter = to_map_style_dataset(open(file_name,'r'))
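
The returned object supports __len__ and __getitem__, so it can be indexed directly or handed to a torch.utils.data.DataLoader with shuffling (a usage sketch, assuming train_dataset from above):

>>> from torch.utils.data import DataLoader
>>> len(train_dataset)        # number of examples pulled out of the iterator
>>> train_dataset[0]          # random access into the materialized examples
>>> train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True)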
