## Summarization

This model takes a Korean news-article summary as input and generates a headline that fits it.
It was fine-tuned from the existing 'kobart-summarization' model on roughly 1,000 Korean article-headline pairs.
## How to use

```bash
pip install torch transformers
```
```python
import torch
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

# Load the fine-tuned headline-generation model and its tokenizer
model = BartForConditionalGeneration.from_pretrained('yebini/kobart-headline-gen')
tokenizer = PreTrainedTokenizerFast.from_pretrained('yebini/kobart-headline-gen')

# Example input: a Korean news-article summary
text = '서울 중부경찰서는 마약을 투약한 상태로 운전을 하다 교통사고를 내고 도주한 혐의로 40대 남성에 대해 구속영장을 신청했습니다. 이 남성은 지난 5일 오전 6시 15분쯤 서울 중구 광희동의 한 도로에서 마약을 투약한 상태로 고급 외제 차를 몰다 신호 대기 중인 차량 2대를 들이받은 뒤 도주한 혐의를 받고 있습니다. 이 남성은 사고 직후 2백 미터가량 달아났다 다시 현장으로 돌아와 경찰에 자수했으며, 마약 간이 시약 검사에서 대마 양성 반응이 나온 걸로 확인됐습니다. 남성이 들이받은 차량에 타고 있던 운전자들은 크게 다치지는 않은 걸로 전해졌으며, 경찰은 남성의 소변과 모발을 국립과학수사연구원에 보내 정밀 감정을 의뢰하고 자세한 경위를 조사하고 있습니다.'

# Tokenize the summary and generate a headline with beam search
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
summary_ids = model.generate(
    inputs.input_ids,
    max_length=64,
    num_beams=5,
    repetition_penalty=1.2,
    length_penalty=0.8,
    early_stopping=True,
)
title = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("제목:", title)  # "제목" means "Title"
```
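To generate headlines for several summaries at once, the tokenizer can pad them into a single batch and `generate` can run on the whole batch. A minimal sketch, assuming the same model and tokenizer as above (the two short example strings here are placeholders for real article summaries, not data from the model card):

```python
import torch
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

model = BartForConditionalGeneration.from_pretrained('yebini/kobart-headline-gen')
tokenizer = PreTrainedTokenizerFast.from_pretrained('yebini/kobart-headline-gen')

# Placeholder summaries for illustration only
texts = [
    '서울 중부경찰서는 교통사고를 내고 도주한 혐의로 40대 남성에 대해 구속영장을 신청했습니다.',
    '경찰은 남성의 소변과 모발을 국립과학수사연구원에 보내 정밀 감정을 의뢰했습니다.',
]

# Pad to a common length and keep the attention mask so padding is ignored
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    ids = model.generate(
        batch.input_ids,
        attention_mask=batch.attention_mask,
        max_length=64,
        num_beams=5,
        early_stopping=True,
    )
titles = tokenizer.batch_decode(ids, skip_special_tokens=True)
```

Passing `attention_mask` matters here: without it, beam search would attend to the pad tokens of the shorter summary and can degrade its headline.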
## Model tree for yebini/kobart-headline-gen

Base model: gogamza/kobart-summarization