
Self-Supervised Learning

Discover how self-supervised learning leverages unlabeled data for efficient training, transforming AI in computer vision, natural language processing, and beyond.

Self-Supervised Learning (SSL) is a machine learning (ML) approach that enables models to learn from vast amounts of unlabeled data. Unlike supervised learning, which heavily depends on meticulously labeled data, SSL ingeniously creates its own supervisory signals directly from the input data itself. This makes it exceptionally valuable in fields like computer vision (CV) and natural language processing (NLP), where unlabeled data is abundant, but the cost and effort of manual labeling (data annotation) can be prohibitive.

How Self-Supervised Learning Works

The core mechanism behind SSL involves designing a "pretext task." This is an auxiliary, self-generated task where the model must predict certain properties of the data that have been intentionally hidden or altered. By solving this pretext task, the model is compelled to learn meaningful underlying structures and representations (embeddings) of the data without human-provided labels. This initial training phase is commonly referred to as pre-training.

For instance, in computer vision, a pretext task might involve:

  • Predicting the relative position of shuffled image patches.
  • Colorizing a grayscale image.
  • Filling in missing parts of an image (inpainting).
  • Learning representations by contrasting different augmented views of the same image, a technique used in contrastive learning methods like SimCLR and MoCo (a minimal sketch of this idea follows this list).

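The contrastive approach in the last bullet can be made concrete with a short PyTorch sketch. Everything here (the toy encoder, the noise-based "augmentations," and the hyperparameters) is an illustrative assumption rather than the SimCLR reference implementation; the point is that the supervisory signal is computed entirely from two views of the same unlabeled images.

```python
# Sketch of a SimCLR-style contrastive pretext task in PyTorch.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder: maps 3x32x32 images to normalized 128-d embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent: pull two views of the same image together in embedding
    space, push views of different images apart."""
    z = torch.cat([z1, z2], dim=0)            # (2N, dim)
    sim = z @ z.T / temperature               # cosine sims (z is normalized)
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = SmallEncoder()
images = torch.randn(8, 3, 32, 32)               # stand-in for a real batch
view1 = images + 0.1 * torch.randn_like(images)  # stand-in augmentations;
view2 = images + 0.1 * torch.randn_like(images)  # real SSL uses crops, flips, jitter
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                                  # learning signal, no labels needed
```
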
In NLP, a well-known pretext task is masked language modeling, famously used by models like BERT. Here, the model learns to predict words that have been randomly masked (hidden) within sentences.

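As a concrete illustration, the toy PyTorch sketch below applies the same idea: it masks roughly 15% of token IDs and trains a small Transformer encoder to reconstruct them. The vocabulary, mask rate, and model size are illustrative assumptions, not BERT's actual configuration.

```python
# Toy sketch of the masked-language-modeling pretext task.
import torch
import torch.nn as nn

vocab_size, mask_id = 1000, 1
tokens = torch.randint(2, vocab_size, (4, 16))  # stand-in batch of token IDs

# Randomly hide ~15% of tokens; the original tokens become the targets.
mask = torch.rand(tokens.shape) < 0.15
inputs = tokens.masked_fill(mask, mask_id)

model = nn.Sequential(                          # tiny stand-in for BERT
    nn.Embedding(vocab_size, 64),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=2,
    ),
    nn.Linear(64, vocab_size),
)

logits = model(inputs)                          # (batch, seq, vocab)
# The supervisory signal comes from the data itself: predict the hidden
# tokens at the masked positions only.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```
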
After pre-training on large unlabeled datasets, the model captures rich feature representations. This pre-trained model can then be adapted for specific downstream tasks—such as object detection, image classification, or sentiment analysis—through a process called fine-tuning. Fine-tuning typically requires a much smaller amount of labeled data compared to training a model from scratch, making SSL a key enabler for effective transfer learning.

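A minimal, self-contained sketch of this fine-tuning step (all module names and sizes are illustrative): a backbone that would normally carry SSL pre-trained weights is frozen, and only a small task-specific head is trained on a labeled batch.

```python
# Sketch of fine-tuning after SSL pre-training; names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # pretend these weights were
    nn.Conv2d(3, 32, 3, stride=2, padding=1),  # learned via an SSL pretext task
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 128),
)
for p in encoder.parameters():                 # freeze the pre-trained backbone
    p.requires_grad = False

head = nn.Linear(128, 10)                      # new head for a 10-class task
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

images = torch.randn(8, 3, 32, 32)             # small *labeled* batch
labels = torch.randint(0, 10, (8,))

loss = nn.functional.cross_entropy(head(encoder(images)), labels)
loss.backward()                                # gradients flow only to the head
optimizer.step()
```
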
SSL vs. Other Learning Paradigms

It's crucial to differentiate SSL from related ML paradigms:

  • Supervised Learning: Relies entirely on labeled data, where each input is paired with a correct output. SSL, conversely, generates its labels from the data itself.
  • Unsupervised Learning: Aims to find patterns (like clustering) or reduce dimensionality in unlabeled data without predefined pretext tasks. While SSL uses unlabeled data like unsupervised learning, it differs by creating explicit supervisory signals through pretext tasks to guide representation learning.
  • Semi-Supervised Learning: Uses a combination of a small amount of labeled data and a large amount of unlabeled data. SSL pre-training can often be a preliminary step before semi-supervised fine-tuning.

Real-World Applications

SSL has significantly advanced Artificial Intelligence (AI) capabilities:

  1. Advancing Computer Vision Models: SSL pre-training allows models like Ultralytics YOLO11 to learn robust visual features from massive unlabeled image datasets before being fine-tuned for tasks like object detection in autonomous vehicles or medical image analysis. Using pre-trained weights derived from SSL often leads to better performance and faster convergence during model training (see the sketch after this list).
  2. Powering Large Language Models (LLMs): Foundation models like GPT-4 and BERT heavily rely on SSL pretext tasks (like masked language modeling) during their pre-training phase on vast text corpora. This enables them to understand language structure, grammar, and context, powering applications ranging from sophisticated chatbots and machine translation to text summarization.

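Item 1's workflow looks like this with the Ultralytics Python API: load pre-trained weights, fine-tune on a labeled dataset, then run inference. `coco8.yaml` is a small demo dataset bundled with the library; in practice you would substitute your own dataset YAML.

```python
# Sketch of the transfer-learning workflow with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                 # start from pre-trained weights
model.train(data="coco8.yaml", epochs=10)  # fine-tune on a labeled dataset
results = model("https://ultralytics.com/images/bus.jpg")  # run inference
```
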
SSL significantly reduces the dependence on expensive labeled datasets, democratizing the development of powerful AI models. Tools like PyTorch and TensorFlow, along with platforms such as Ultralytics HUB, provide environments to leverage SSL techniques for building and deploying cutting-edge AI solutions.
