Sequence-to-Sequence Models

Learn how sequence-to-sequence models transform input sequences into output sequences, powering AI tasks such as translation, chatbots, and speech recognition.

Sequence-to-Sequence (Seq2Seq) models are a class of deep learning models designed to transform an input sequence into an output sequence, where the lengths of the input and output sequences can differ. This capability makes them exceptionally powerful for a wide range of tasks, particularly in Natural Language Processing (NLP), but also applicable in other domains like speech recognition and even certain computer vision problems involving sequential data. The core idea, introduced in papers like "Sequence to Sequence Learning with Neural Networks" by Sutskever et al. and "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation" by Cho et al., revolutionized how machines handle variable-length sequence transformations.

Encoder-Decoder Architecture

The fundamental structure of most Seq2Seq models is the encoder-decoder architecture (a minimal code sketch follows the list below):

  1. Encoder: This part processes the entire input sequence (e.g., a sentence in French) step-by-step, typically using a Recurrent Neural Network (RNN) like LSTM (Long Short-Term Memory) or GRU (Gated Recurrent Unit). Its goal is to compress the information from the input sequence into a fixed-size internal representation, often called a "context vector" or "thought vector." This vector aims to capture the essence or meaning of the input sequence.
  2. Decoder: This part takes the context vector generated by the encoder and produces the output sequence step-by-step (e.g., the translated sentence in English). It's also typically an RNN that generates one element (like a word or character) at each time step, conditioned on the context vector and the elements generated in previous steps.
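
A minimal, illustrative PyTorch sketch of this structure is shown below. It is not code from the papers cited above: the vocabulary sizes, hidden dimensions, start-token ID, and greedy decoding loop are arbitrary choices made purely for demonstration.

```python
# Minimal encoder-decoder sketch (illustrative only; all sizes are arbitrary).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src: torch.Tensor) -> torch.Tensor:
        # src: (batch, src_len) integer token IDs
        _, hidden = self.rnn(self.embedding(src))
        return hidden  # (1, batch, hidden_dim) -- the fixed-size "context vector"


class Decoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token: torch.Tensor, hidden: torch.Tensor):
        # token: (batch, 1) previously generated token; hidden: current decoder state
        output, hidden = self.rnn(self.embedding(token), hidden)
        return self.out(output), hidden  # logits over the output vocabulary


# Toy usage: encode a batch of source sequences, then decode greedily.
src = torch.randint(0, 1000, (2, 7))         # 2 sequences of 7 source tokens
encoder, decoder = Encoder(1000), Decoder(1200)
hidden = encoder(src)                        # context vector summarizing the input
token = torch.zeros(2, 1, dtype=torch.long)  # assumed <sos> start token with ID 0
for _ in range(5):                           # generate 5 output tokens
    logits, hidden = decoder(token, hidden)
    token = logits.argmax(dim=-1)            # greedy choice feeds the next step
```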

A significant enhancement to this basic structure was the introduction of the attention mechanism, detailed in Bahdanau et al.'s paper "Neural Machine Translation by Jointly Learning to Align and Translate". Attention allows the decoder to selectively focus on different parts of the input sequence when generating each element of the output sequence, rather than relying solely on the single fixed-size context vector. This dramatically improved performance, especially for long sequences. This concept paved the way for architectures like the Transformer, which relies entirely on attention mechanisms, dispensing with recurrence altogether and becoming the foundation for models like BERT and GPT.
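
To make this concrete, the sketch below scores a set of encoder outputs against the current decoder state and averages them into a context vector. It uses simple dot-product scoring rather than the additive scoring of Bahdanau et al., and all tensors are random placeholders.

```python
# Illustrative attention step (dot-product scoring; shapes are arbitrary).
import torch
import torch.nn.functional as F

batch, src_len, hidden_dim = 2, 7, 128
encoder_outputs = torch.randn(batch, src_len, hidden_dim)  # one vector per source position
decoder_state = torch.randn(batch, hidden_dim)             # current decoder hidden state

# Score each source position against the decoder state, normalize, then mix.
scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
weights = F.softmax(scores, dim=1)                                          # attention weights
context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)       # (batch, hidden_dim)
# `context` emphasizes the source positions most relevant to this decoding step.
```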

Real-World Applications

Seq2Seq models excel in tasks where input and output are sequential but may not have a one-to-one correspondence in length or structure. Key applications include:

  • Machine Translation: Translating text from one language to another (e.g., powering services like Google Translate or DeepL Translator). This was one of the first major successes of Seq2Seq models.
  • Text Summarization: Generating a concise summary from a longer document or article. The input is the long text sequence, and the output is the shorter summary sequence.
  • Conversational AI / Chatbots: Generating responses in a dialogue system. The input is the user's query or statement, and the output is the chatbot's reply. Platforms like Google Dialogflow utilize such technologies.
  • Speech Recognition: Converting spoken audio (a sequence of audio features) into text (a sequence of words).
  • Image Captioning: Generating a textual description (output sequence) for an image (input sequence, often represented as features extracted by a CNN). While the input isn't strictly sequential, the output generation process fits the Seq2Seq paradigm.
  • Code Generation: Generating programming code based on natural language descriptions.

Key Concepts and Considerations

Building and training Seq2Seq models involves several important concepts:

  • Embeddings: Input words or tokens are typically converted into dense vector representations before being fed into the encoder.
  • Backpropagation Through Time (BPTT): The standard method for training RNNs by unfolding the network over the sequence length.
  • Handling Long Sequences: Basic RNNs struggle with long dependencies due to issues like the vanishing gradient problem. LSTMs and GRUs were designed to mitigate this, and attention mechanisms further improve performance on long sequences. Transformer models excel here.
  • Evaluation Metrics: Depending on the task, metrics like BLEU (for translation), ROUGE (for summarization), or accuracy/F1-score (for sequence labeling) are used; a short BLEU example follows this list. Ultralytics provides guidance on performance metrics.
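
As a concrete example, sentence-level BLEU can be computed with NLTK. This assumes the nltk package is available, and the tokens below are made up purely for illustration.

```python
# Hedged example: sentence-level BLEU with NLTK (assumes `nltk` is installed).
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

references = [["the", "cat", "sat", "on", "the", "mat"]]  # list of tokenized references
hypothesis = ["the", "cat", "is", "on", "the", "mat"]     # tokenized model output

score = sentence_bleu(references, hypothesis, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```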

Seq2Seq vs. Other Architectures

While Seq2Seq models based on RNNs were groundbreaking, the field has evolved:

  • Standard RNNs: Typically map sequences to sequences of the same length or classify entire sequences, lacking the flexibility of the encoder-decoder structure for variable output lengths.
  • Transformers: Now dominate many NLP tasks previously handled by RNN-based Seq2Seq models. They use self-attention and positional encodings instead of recurrence, allowing for better parallelization and capturing long-range dependencies more effectively. Models like Baidu's RT-DETR, supported by Ultralytics, incorporate Transformer components for object detection.
  • CNNs: Primarily used for grid-like data such as images (e.g., in Ultralytics YOLO models for detection and segmentation), though sometimes adapted for sequence tasks.

While Seq2Seq often refers to the RNN-based encoder-decoder structure, the general principle of mapping input sequences to output sequences using an intermediate representation remains central to many modern architectures, including Transformers used in translation and summarization. Tools like PyTorch and TensorFlow provide building blocks for implementing both traditional and modern sequence models. Managing the training process can be streamlined using platforms like Ultralytics HUB.
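
As one illustration of such building blocks, PyTorch's nn.Transformer module implements the attention-based encoder-decoder pattern directly. The minimal forward pass below uses arbitrary shapes and omits the token embeddings, positional encodings, masking, and output projection that a real translation or summarization system would add.

```python
# Minimal Transformer seq2seq forward pass (illustrative; shapes are arbitrary).
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, batch_first=True)
src = torch.randn(2, 10, 512)  # (batch, source length, model dimension)
tgt = torch.randn(2, 6, 512)   # (batch, target length, model dimension)
out = model(src, tgt)          # (2, 6, 512): one representation per target position
```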
