Common use cases of the Transformers library

This chapter presents the most common use cases of the Transformers library. The available models allow for many different configurations and offer great versatility across use cases. The simplest approaches are shown here, demonstrating usage for tasks such as question answering, sequence classification, and named entity recognition.

These examples leverage the Auto Model classes, which instantiate a model from a given checkpoint and automatically select the correct model architecture. See the AutoModel documentation for details. Feel free to modify the code to make it more specific and adapt it to your particular use case.
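For instance, here is a minimal sketch of the Auto classes resolving the architecture from a checkpoint name (the bert-base-cased checkpoint is used purely as an illustration):

<code>from transformers import AutoTokenizer, AutoModel

# The Auto classes inspect the checkpoint's configuration and instantiate
# the matching architecture (a BERT model for this checkpoint)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")</code>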

  • In order for a model to perform well on a task, it must be loaded from a checkpoint corresponding to that task. These checkpoints are usually pre-trained on large amounts of data and fine-tuned on a specific task. This means the following: not all models were fine-tuned on all tasks. If you want to fine-tune a model on a specific task, you can leverage the run_$TASK.py scripts in the examples directory.
  • Fine-tuned models were fine-tuned on a specific dataset. This dataset may or may not overlap with your use case and domain. As mentioned above, you can leverage the example scripts to fine-tune a model, or you can create your own training script.

The library provides several mechanisms for running inference on a task:

  • Pipelines: very easy-to-use abstractions that require as little as two lines of code.
  • Using a model directly with a tokenizer (PyTorch/TensorFlow) for the full inference of the model. This mechanism is slightly more involved, but more powerful.

Both approaches are shown here.

Note that all of the tasks presented here use models that were fine-tuned on a specific task after pre-training. When loading a checkpoint that was not fine-tuned on a specific task, only the transformer layers are loaded; the additional layers used for the task are not, so their weights are randomly initialized. This produces random output.
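As a minimal illustration of this point (assuming the bert-base-cased checkpoint, which carries no sequence classification head), loading such a checkpoint into a task-specific class leaves the task head randomly initialized, and the library typically warns that some weights were newly initialized:

<code>from transformers import AutoModelForSequenceClassification

# bert-base-cased was not fine-tuned for sequence classification, so the
# classification head on top of the transformer is randomly initialized;
# its predictions are meaningless until the model is fine-tuned
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")</code>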

Sequence Classification

Sequence classification is the task of classifying sequences according to a given set of classes. An example of sequence classification is the GLUE dataset, which is entirely based on this task. If you would like to fine-tune a model on a GLUE sequence classification task, you can leverage the run_glue.py or run_tf_glue.py scripts.

Here is an example of using a pipeline for sentiment analysis: identifying whether a sequence is positive or negative. It leverages a model fine-tuned on SST-2, which is a GLUE task.

<code>from transformers import pipeline

nlp = pipeline("sentiment-analysis")

print(nlp("I hate you"))
print(nlp("I love you"))</code>

This returns a label ("POSITIVE" or "NEGATIVE") together with a score, as follows:

<code>[{'label': 'NEGATIVE', 'score': 0.9991129}]
[{'label': 'POSITIVE', 'score': 0.99986565}]</code>

Here is an example of sequence classification using a model, to determine whether two sequences are paraphrases of each other. The process is as follows:

  • Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and loaded with the weights stored in the checkpoint.
  • Build a sequence from the two sentences, with the correct model-specific separators, token type ids and attention masks (encode() and encode_plus() take care of this).
  • Pass this sequence through the model so that it is classified into one of the two available classes: 0 (not a paraphrase) and 1 (is a paraphrase).
  • Compute the softmax of the result to get probabilities over the classes.
  • Print the results.

PyTorch code

<code>from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")

classes = ["not paraphrase", "is paraphrase"]

sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, return_tensors="pt")

paraphrase_classification_logits = model(**paraphrase)[0]
not_paraphrase_classification_logits = model(**not_paraphrase)[0]

paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]

print("Should be paraphrase")

for i in range(len(classes)):
    print(f"{classes[i]}: {round(paraphrase_results[i] * 100)}%")

print("\nShould not be paraphrase")
for i in range(len(classes)):
    print(f"{classes[i]}: {round(not_paraphrase_results[i] * 100)}%")</code>

TensorFlow code

<code>from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")

classes = ["not paraphrase", "is paraphrase"]

sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, return_tensors="tf")
not_paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, return_tensors="tf")

paraphrase_classification_logits = model(paraphrase)[0]
not_paraphrase_classification_logits = model(not_paraphrase)[0]

paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]

print("Should be paraphrase")
for i in range(len(classes)):
    print(f"{classes[i]}: {round(paraphrase_results[i] * 100)}%")

print("\nShould not be paraphrase")
for i in range(len(classes)):
    print(f"{classes[i]}: {round(not_paraphrase_results[i] * 100)}%")</code>

This outputs the following results:

<code>Should be paraphrase
not paraphrase: 10%
is paraphrase: 90%

Should not be paraphrase
not paraphrase: 94%
is paraphrase: 6%</code>

Extractive Question Answering

Extractive question answering is the task of extracting an answer from a text given a question. An example of a question answering dataset is the SQuAD dataset, which is entirely based on this task. If you would like to fine-tune a model on a SQuAD task, you can leverage the run_squad.py script.

Here is an example of using a pipeline for question answering: extracting an answer from a text given a question. It leverages a model fine-tuned on SQuAD.

<code>from transformers import pipeline

nlp = pipeline("question-answering")

context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the `run_squad.py`.
"""

print(nlp(question="What is extractive question answering?", context=context))
print(nlp(question="What is a good example of a question answering dataset?", context=context))</code>

This returns an answer extracted from the text, a confidence score, along with "start" and "end" values, which are the positions of the extracted answer in the text.

<code>{'score': 0.622232091629833, 'start': 34, 'end': 96, 'answer': 'the task of extracting an answer from a text given a question.'}
{'score': 0.5115299158662765, 'start': 147, 'end': 161, 'answer': 'SQuAD dataset,'}</code>
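The start and end values appear to be character offsets into the provided context, so slicing the context with them should recover the answer string; a small sketch reusing the nlp pipeline and context defined above:

<code>result = nlp(question="What is extractive question answering?", context=context)
# The start/end fields index into the context string
print(context[result["start"]:result["end"]])  # should match result["answer"]</code>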

Here is an example of question answering using a model and a tokenizer. The process is as follows:

  • Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and loaded with the weights stored in the checkpoint.
  • Define a text and a few questions.
  • Iterate over the questions and build a sequence from the text and the current question, with the correct model-specific separators, token type ids and attention masks; pass this sequence through the model. This outputs a range of scores across the entire sequence of tokens (question and text) for both the start and the end positions.
  • Compute the most likely start and end positions of the answer from these scores (the code below takes the argmax).
  • Convert those tokens to a string.
  • Print the results.

PyTorch code

<code>from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

text = r"""
Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
"How many pretrained models are available in Transformers?",
"What does Transformers provide?",
"Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]

    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)

    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = torch.argmax(answer_start_scores)
    # Get the most likely end of the answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1

    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))

    print(f"Question: {question}")
    print(f"Answer: {answer}\n")</code>

TensorFlow code

<code>from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = TFAutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

text = r"""
Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
"How many pretrained models are available in Transformers?",
"What does Transformers provide?",
"Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="tf")
    input_ids = inputs["input_ids"].numpy()[0]

    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(inputs)

    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = tf.argmax(answer_start_scores, axis=1).numpy()[0]
    # Get the most likely end of the answer with the argmax of the score
    answer_end = (tf.argmax(answer_end_scores, axis=1) + 1).numpy()[0]

    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))

    print(f"Question: {question}")
    print(f"Answer: {answer}\n")</code>

This outputs the questions followed by the predicted answers:

<code>Question: How many pretrained models are available in Transformers?
Answer: over 32 +

Question: What does Transformers provide?
Answer: general - purpose architectures

Question: Transformers provides interoperability between which frameworks?
Answer: tensorflow 2 . 0 and pytorch</code>

Language Modeling

Language modeling is the task of fitting a model to a corpus, which can be domain specific. All popular transformer-based models are trained using a variant of language modeling, e.g. masked language modeling for BERT, causal language modeling for GPT-2.

Language modeling is also useful outside of pre-training, for example to shift the model distribution towards a specific domain: using a language model trained over a very large corpus and then fine-tuning it on a news dataset or on scientific papers, e.g. LysandreJik/arxiv-nlp (https://huggingface.co/lysandre/arxiv-nlp).
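As a hedged sketch, such a domain-adapted checkpoint can be loaded like any other model, assuming lysandre/arxiv-nlp is a GPT-2-style language model hosted on the model hub:

<code>from transformers import AutoModelWithLMHead, AutoTokenizer

# Assumption: this checkpoint is the arXiv-adapted language model referenced above
tokenizer = AutoTokenizer.from_pretrained("lysandre/arxiv-nlp")
model = AutoModelWithLMHead.from_pretrained("lysandre/arxiv-nlp")</code>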

Masked Language Modeling

Masked language modeling is the task of masking tokens in a sequence with a masking token and prompting the model to fill that mask with an appropriate token. This allows the model to attend to both the right context (tokens to the right of the mask) and the left context (tokens to the left of the mask). Such training creates a strong basis for downstream tasks that require bi-directional context, such as SQuAD.

Here is an example of using a pipeline to replace a mask in a sequence:

<code>from transformers import pipeline

nlp = pipeline("fill-mask")
print(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))</code>

This outputs the sequences with the mask filled, the confidence score, and the token id in the tokenizer's vocabulary:

<code>[
{'sequence': ' HuggingFace is creating a tool that the community uses to solve NLP tasks.', 'score': 0.15627853572368622, 'token': 3944},
{'sequence': ' HuggingFace is creating a framework that the community uses to solve NLP tasks.', 'score': 0.11690319329500198, 'token': 7208},
{'sequence': ' HuggingFace is creating a library that the community uses to solve NLP tasks.', 'score': 0.058063216507434845, 'token': 5560},
{'sequence': ' HuggingFace is creating a database that the community uses to solve NLP tasks.', 'score': 0.04211743175983429, 'token': 8503},
{'sequence': ' HuggingFace is creating a prototype that the community uses to solve NLP tasks.', 'score': 0.024718601256608963, 'token': 17715}
]</code>
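The token field is an id in the tokenizer's vocabulary; it can be mapped back to its surface form with the pipeline's tokenizer. A small sketch reusing the nlp pipeline above (3944 is the token id of the first prediction in the output):

<code># Decode a returned token id back into its string form
print(nlp.tokenizer.decode([3944]))</code>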

Here is an example of masked language modeling using a model and a tokenizer. The process is as follows:

  • Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a DistilBERT model and loaded with the weights stored in the checkpoint.
  • Define a sequence with a masked token, placing tokenizer.mask_token instead of a word.
  • Encode that sequence into ids and find the position of the masked token in that list of ids.
  • Retrieve the predictions at the index of the masked token: this tensor has the same size as the vocabulary, and the values are the scores attributed to each token. The model gives higher scores to tokens it deems probable in that context.
  • Retrieve the top 5 tokens using the PyTorch topk or TensorFlow top_k methods.
  • Replace the masked token with the predicted tokens and print the results.

PyTorch code

<code>from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")

sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

input = tokenizer.encode(sequence, return_tensors="pt")
mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]

token_logits = model(input)[0]
mask_token_logits = token_logits[0, mask_token_index, :]

top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()

for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))</code>

TensorFlow code

<code>from transformers import TFAutoModelWithLMHead, AutoTokenizer
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFAutoModelWithLMHead.from_pretrained("distilbert-base-cased")

sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

input = tokenizer.encode(sequence, return_tensors="tf")
mask_token_index = tf.where(input == tokenizer.mask_token_id)[0, 1]


token_logits = model(input)[0]
mask_token_logits = token_logits[0, mask_token_index, :]

top_5_tokens = tf.math.top_k(mask_token_logits, 5).indices.numpy()

for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))</code>

This prints five sequences, each filled with one of the top 5 tokens predicted by the model:

<code>Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint.</code>

Causal Language Modeling

Causal language modeling is the task of predicting the token that follows a sequence of tokens. In this situation, the model only attends to the left context (tokens to the left of the mask). Such training is particularly useful for generation tasks.

There is currently no pipeline for causal language modeling/generation. Here is an example using a tokenizer and a model, leveraging the generate() method to generate tokens following an initial sequence in PyTorch, and building a simple loop in TensorFlow.

PyTorch code

<code>from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("gpt2")

sequence = f"Hugging Face is based in DUMBO, New York City, and is"

input = tokenizer.encode(sequence, return_tensors="pt")
generated = model.generate(input, max_length=50)


resulting_string = tokenizer.decode(generated.tolist()[0])
print(resulting_string)</code>

TensorFlow code

<code>from transformers import TFAutoModelWithLMHead, AutoTokenizer
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelWithLMHead.from_pretrained("gpt2")

sequence = f"Hugging Face is based in DUMBO, New York City, and is"
generated = tokenizer.encode(sequence)

for i in range(50):
    predictions = model(tf.constant([generated]))[0]
    token = tf.argmax(predictions[0], axis=1)[-1].numpy()
    generated += [token]

resulting_string = tokenizer.decode(generated)
print(resulting_string)</code>

This outputs a (hopefully) coherent string following the original sequence, with generate() sampling from a top_p/top_k distribution:

<code>Hugging Face is based in DUMBO, New York City, and is a live-action TV series based on the novel by John
Carpenter, and its producers, David Kustlin and Steve Pichar. The film is directed by!</code>
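The sampling behaviour can be tuned through keyword arguments of generate(); a minimal sketch based on the PyTorch example above, with purely illustrative values for the sampling parameters:

<code># Sample instead of decoding greedily; top_k/top_p restrict the candidate tokens
generated = model.generate(input, max_length=50, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(generated.tolist()[0]))</code>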

Named Entity Recognition

Named entity recognition (NER) is the task of classifying tokens according to a class, for example identifying a token as a person, an organisation or a location. An example of a named entity recognition dataset is the CoNLL-2003 dataset, which is entirely based on this task. If you would like to fine-tune a model on an NER task, you can leverage the ner/run_ner.py (PyTorch), ner/run_pl_ner.py (leveraging pytorch-lightning) or ner/run_tf_ner.py (TensorFlow) scripts.

Here is an example of using a pipeline for named entity recognition, trying to identify tokens as belonging to one of 9 classes:

  • O, not a named entity
  • B-MIS, beginning of a miscellaneous entity
  • I-MIS, miscellaneous entity
  • B-PER, beginning of a person's name
  • I-PER, person's name
  • B-ORG, beginning of an organisation
  • I-ORG, organisation
  • B-LOC, beginning of a location
  • I-LOC, location

It leverages a model fine-tuned on CoNLL-2003 by @stefan-it from dbmdz.

<code>from transformers import pipeline

nlp = pipeline("ner")

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge which is visible from the window."

print(nlp(sequence))</code>

This outputs a list of all the words identified as an entity from the 9 classes defined above. Here are the expected results:

<code>[
{'word': 'Hu', 'score': 0.9995632767677307, 'entity': 'I-ORG'},
{'word': '##gging', 'score': 0.9915938973426819, 'entity': 'I-ORG'},
{'word': 'Face', 'score': 0.9982671737670898, 'entity': 'I-ORG'},
{'word': 'Inc', 'score': 0.9994403719902039, 'entity': 'I-ORG'},
{'word': 'New', 'score': 0.9994346499443054, 'entity': 'I-LOC'},
{'word': 'York', 'score': 0.9993270635604858, 'entity': 'I-LOC'},
{'word': 'City', 'score': 0.9993864893913269, 'entity': 'I-LOC'},
{'word': 'D', 'score': 0.9825621843338013, 'entity': 'I-LOC'},
{'word': '##UM', 'score': 0.936983048915863, 'entity': 'I-LOC'},
{'word': '##BO', 'score': 0.8987102508544922, 'entity': 'I-LOC'},
{'word': 'Manhattan', 'score': 0.9758241176605225, 'entity': 'I-LOC'},
{'word': 'Bridge', 'score': 0.990249514579773, 'entity': 'I-LOC'}
]</code>

Note how "Hugging Face" was identified as an organisation, and "New York City", "DUMBO" and "Manhattan Bridge" as locations.

Here is an example of named entity recognition using a model and a tokenizer. The process is as follows:

  • Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and loaded with the weights stored in the checkpoint.
  • Define the list of labels the model was trained with.
  • Define a sequence with known entities, such as "Hugging Face" as an organisation and "New York City" as a location.
  • Split the words into tokens so that they can be mapped to the predictions. We use a small hack by first fully encoding and decoding the sequence, so that we are left with a string containing the special tokens.
  • Encode that sequence into ids (special tokens are added automatically).
  • Retrieve the predictions by passing the input through the model and taking the first output. This gives a distribution over the 9 possible classes for each token. We take the argmax to retrieve the most likely class for each token.
  • Zip each token with its prediction and print them.

PyTorch code

<code>from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location
    "I-LOC"    # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")

outputs = model(inputs)[0]
predictions = torch.argmax(outputs, dim=2)

print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])</code>

TensorFlow code

<code>from transformers import TFAutoModelForTokenClassification, AutoTokenizer
import tensorflow as tf

model = TFAutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location
    "I-LOC"    # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="tf")

outputs = model(inputs)[0]
predictions = tf.argmax(outputs, axis=2)

print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])</code>

This outputs a list of each token mapped to its prediction. Unlike the pipeline, every token has a prediction here, since we did not remove the "O" class, which means no particular entity was found for that token. The following array should be the output:

<code>[('[CLS]', 'O'), ('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG'), ('.', 'O'), ('is', 'O'), ('a', 'O'), ('company', 'O'), ('based', 'O'), ('in', 'O'), ('New', 'I-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC'), ('.', 'O'), ('Its', 'O'), ('headquarters', 'O'), ('are', 'O'), ('in', 'O'), ('D', 'I-LOC'), ('##UM', 'I-LOC'), ('##BO', 'I-LOC'), (',', 'O'), ('therefore', 'O'), ('very', 'O'), ('##c', 'O'), ('##lose', 'O'), ('to', 'O'), ('the', 'O'), ('Manhattan', 'I-LOC'), ('Bridge', 'I-LOC'), ('.', 'O'), ('[SEP]', 'O')]/<code>

