MoonNote



Pretext task

Kisung Moon 2021. 12. 1. 14:04

Generally, computer vision pipelines that employ self-supervised learning involve two tasks: a pretext task and a real (downstream) task.

  • The real (downstream) task can be any task, such as classification or detection, for which annotated data samples are insufficient.
  • The pretext task is the self-supervised learning task solved to learn visual representations, with the aim of using the learned representations or model weights obtained in the process for the downstream task.

 

When self-supervised learning is applied in computer vision, the pipeline can be divided into a pretext task and a real (downstream) task.

- The downstream task is whatever task we actually want to solve, such as a classification or detection task with insufficient labeled data.

- The pretext task is the self-supervised learning task used to learn visual representations, with the goal of reusing the learned representations or model weights for the downstream task.
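A minimal sketch of this two-stage idea, using rotation prediction as the pretext task (one common choice, not one named in this post). All names here (encoder, pretext_head, downstream_head) are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Shared backbone: learns the visual representation during the pretext task
# and is reused for the downstream task. (Illustrative toy architecture.)
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

pretext_head = nn.Linear(16, 4)      # predict one of 4 rotations (0/90/180/270 degrees)
downstream_head = nn.Linear(16, 10)  # e.g. 10-class classification with few labels

x = torch.randn(8, 3, 32, 32)        # dummy batch of images

# Pretext task: labels are generated from the data itself (no human annotation).
rot_labels = torch.randint(0, 4, (8,))
rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                       for img, k in zip(x, rot_labels)])
pretext_logits = pretext_head(encoder(rotated))
pretext_loss = nn.functional.cross_entropy(pretext_logits, rot_labels)

# Downstream task: reuse the (pre)trained encoder as a frozen feature
# extractor and train only a small head on the scarce labeled data.
with torch.no_grad():
    features = encoder(x)
downstream_logits = downstream_head(features)
```

In practice the pretext loss would first be minimized over many unlabeled images before the encoder's weights are transferred (frozen or fine-tuned) to the downstream task.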

 

Reference

https://atcold.github.io/pytorch-Deep-Learning/en/week10/10-1/
