
Summarization

이 λͺ¨λΈμ€ ν•œκ΅­μ–΄ 기사 μš”μ•½λ¬Έμ„ μž…λ ₯으둜 λ„£μœΌλ©΄ 그에 μ–΄μšΈλ¦¬λŠ” 제λͺ©μ„ μƒμ„±ν•΄μ£ΌλŠ” λͺ¨λΈμž…λ‹ˆλ‹€.
기쑴의 'kobart-summarization' λͺ¨λΈμ„ μ•½ 1,000개의 ν•œκ΅­μ–΄ 기사-제λͺ© 쌍으둜 νŒŒμΈνŠœλ‹ν•˜μ˜€μŠ΅λ‹ˆλ‹€.

How to use

Install the dependencies first:

pip install torch transformers

Then load the model and tokenizer:

import torch
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration

# Load the fine-tuned headline-generation model and its tokenizer
model = BartForConditionalGeneration.from_pretrained('yebini/kobart-headline-gen')
tokenizer = PreTrainedTokenizerFast.from_pretrained('yebini/kobart-headline-gen')

text =  'μ„œμšΈ μ€‘λΆ€κ²½μ°°μ„œλŠ” λ§ˆμ•½μ„ νˆ¬μ•½ν•œ μƒνƒœλ‘œ μš΄μ „μ„ ν•˜λ‹€ ꡐ톡사고λ₯Ό λ‚΄κ³  λ„μ£Όν•œ 혐의둜 40λŒ€ 남성에 λŒ€ν•΄ κ΅¬μ†μ˜μž₯을 μ‹ μ²­ν–ˆμŠ΅λ‹ˆλ‹€. 이 남성은 μ§€λ‚œ 5일 μ˜€μ „ 6μ‹œ 15λΆ„μ―€ μ„œμšΈ 쀑ꡬ κ΄‘ν¬λ™μ˜ ν•œ λ„λ‘œμ—μ„œ λ§ˆμ•½μ„ νˆ¬μ•½ν•œ μƒνƒœλ‘œ κ³ κΈ‰ μ™Έμ œμ°¨λ₯Ό λͺ°λ‹€ μ‹ ν˜Έ λŒ€κΈ° 쀑인 μ°¨λŸ‰ 2λŒ€λ₯Ό 듀이받은 λ’€ λ„μ£Όν•œ 혐의λ₯Ό λ°›κ³  μžˆμŠ΅λ‹ˆλ‹€. 이 남성은 사고 직후 2λ°± λ―Έν„°κ°€λŸ‰ 달아났닀 λ‹€μ‹œ ν˜„μž₯으둜 λŒμ•„μ™€ 경찰에 μžμˆ˜ν–ˆμœΌλ©°, λ§ˆμ•½ 간이 μ‹œμ•½ 검사에선 λŒ€λ§ˆ μ–‘μ„± λ°˜μ‘μ΄ λ‚˜μ˜¨ 걸둜 νŒŒμ•…λμŠ΅λ‹ˆλ‹€. 남성이 듀이받은 μ°¨λŸ‰μ— 타고 있던 μš΄μ „μžλ“€μ€ 크게 λ‹€μΉ˜μ§€λŠ” μ•Šμ€ 걸둜 μ „ν•΄μ‘ŒμœΌλ©°, 경찰은 λ‚¨μ„±μ˜ μ†Œλ³€κ³Ό λͺ¨λ°œμ„ κ΅­λ¦½κ³Όν•™μˆ˜μ‚¬μ—°κ΅¬μ›μ— 보내 μ •λ°€ 감정을 μ˜λ’°ν•˜κ³  μžμ„Έν•œ κ²½μœ„λ₯Ό μ‘°μ‚¬ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€.'

# Tokenize the article summary and generate a headline with beam search
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
summary_ids = model.generate(
    inputs.input_ids,
    max_length=64,
    num_beams=5,
    repetition_penalty=1.2,
    length_penalty=0.8,
    early_stopping=True,
)

title = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print("Title:", title)
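For quick experiments, the same model can likely also be driven through the high-level `pipeline` API. This is a hedged sketch, not part of the published usage instructions: the task name `text2text-generation` matches this model's pipeline tag, and it assumes the pipeline can auto-load the tokenizer for this checkpoint.

```python
from transformers import pipeline

# Hypothetical alternative using the high-level pipeline API.
# Assumes AutoTokenizer can resolve this checkpoint's tokenizer.
headline_gen = pipeline("text2text-generation", model="yebini/kobart-headline-gen")

# Shortened version of the sample article from the snippet above
article = (
    "μ„œμšΈ μ€‘λΆ€κ²½μ°°μ„œλŠ” λ§ˆμ•½μ„ νˆ¬μ•½ν•œ μƒνƒœλ‘œ μš΄μ „μ„ ν•˜λ‹€ ꡐ톡사고λ₯Ό λ‚΄κ³  "
    "λ„μ£Όν•œ 혐의둜 40λŒ€ 남성에 λŒ€ν•΄ κ΅¬μ†μ˜μž₯을 μ‹ μ²­ν–ˆμŠ΅λ‹ˆλ‹€."
)

# The pipeline returns a list of dicts with a "generated_text" key
result = headline_gen(article, max_length=64, num_beams=5)
print(result[0]["generated_text"])
```

Generation keyword arguments such as `num_beams` are forwarded to `model.generate`, so the beam-search settings shown earlier carry over unchanged.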