All posts (57)
YAI (Yonsei University Artificial Intelligence Society)
CS231n, Spring 2017: Lecture 15, 16 https://www.youtube.com/playlist?list=PLC1qU-LWwrF64f4QKQT-Vg5Wr4qEE1Zxk (CS231n: Convolutional Neural Networks for Visual Recognition, http://cs231n.stanford.edu/) *This post was written by 김남훈 of YAI's 11th cohort on the "기초심화" (advanced basics) team. Lecture 15: Efficient Methods and Hardware for Deep Learning. For today's AI to be applied directly to real life, the following ..
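One of the core techniques covered under "Efficient Methods" is network pruning. As a hedged illustration of the idea (not code from the post; the function name and the 90% sparsity level are assumptions), here is a minimal magnitude-based pruning sketch in PyTorch:

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = layer.weight.abs().flatten()
    k = int(sparsity * flat.numel())
    threshold = flat.kthvalue(k).values if k > 0 else flat.min() - 1
    mask = (layer.weight.abs() > threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)  # prune in place
    return mask  # reuse the mask to keep pruned weights at zero during fine-tuning

layer = nn.Linear(512, 512)
mask = magnitude_prune(layer, sparsity=0.9)
print(f"remaining weights: {int(mask.sum())} / {mask.numel()}")
```

In practice pruning is followed by fine-tuning with the mask applied, so that accuracy recovers at the reduced size.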
CoCa: Contrastive Captioners are Image-Text Foundation Models https://arxiv.org/abs/2205.01917 Exploring large-scale pretrained foundation models is of significant interest in computer vision because these models can be quickly transferred to many downstream tasks. This paper presents Contrastive Captioner (CoCa), a minimalist design..
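CoCa's central idea is training one model under two objectives at once: a CLIP-style contrastive loss between pooled image and text embeddings, and an autoregressive captioning loss. A toy sketch of that combined objective (the shapes, names, and equal weighting below are assumptions; the paper weights the two terms with hyperparameters):

```python
import torch
import torch.nn.functional as F

def coca_loss(img_emb, txt_emb, caption_logits, caption_targets, temperature=0.07):
    # Contrastive part: each image should match its own caption embedding.
    img_emb = F.normalize(img_emb, dim=-1)        # (B, D)
    txt_emb = F.normalize(txt_emb, dim=-1)        # (B, D)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(img_emb.size(0))
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels)) / 2
    # Captioning part: next-token cross-entropy over the caption sequence.
    captioning = F.cross_entropy(
        caption_logits.reshape(-1, caption_logits.size(-1)),
        caption_targets.reshape(-1))
    return contrastive + captioning

B, D, T, V = 8, 256, 12, 1000  # batch, embed dim, caption length, vocab size
loss = coca_loss(torch.randn(B, D), torch.randn(B, D),
                 torch.randn(B, T, V), torch.randint(0, V, (B, T)))
```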
CS224N, Winter 2023: Lecture 9, 10 https://web.stanford.edu/class/cs224n/index.html (Stanford CS 224N: Natural Language Processing with Deep Learning) Natural language processing (NLP) is a crucial part of artificial intelligence (AI), modeling how people share information. In recent years, deep learning approaches have obtained very high performance on many NLP tasks. In this course, students g..
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators https://arxiv.org/abs/2108.00946 Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large sca..
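The key ingredient of StyleGAN-NADA is a directional CLIP loss: the shift between images from the frozen and the adapted generator should align, in CLIP space, with the shift between the source and target text prompts. A rough sketch of that loss; `encode_image`/`encode_text` and the toy encoders in the usage lines are stand-ins for a real CLIP model, assumed for illustration only:

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(encode_image, encode_text,
                          img_frozen, img_trainable,
                          source_tokens, target_tokens):
    # Text direction, e.g. "photo" -> "sketch"; fixed during training.
    delta_t = F.normalize(encode_text(target_tokens) - encode_text(source_tokens), dim=-1)
    # Image direction: the same latent rendered by the frozen vs. adapted generator.
    delta_i = F.normalize(encode_image(img_trainable) - encode_image(img_frozen), dim=-1)
    return (1 - (delta_i * delta_t).sum(dim=-1)).mean()  # 1 - cosine similarity

# Toy usage with stand-in encoders (a real setup would use CLIP):
enc = torch.nn.Linear(3 * 32 * 32, 64)
encode_image = lambda x: enc(x.flatten(1))
emb = torch.nn.Embedding(100, 64)
encode_text = lambda t: emb(t).mean(dim=1)
loss = directional_clip_loss(encode_image, encode_text,
                             torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32),
                             torch.randint(0, 100, (4, 5)), torch.randint(0, 100, (4, 5)))
```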
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery https://arxiv.org/abs/2103.17249 Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semanti..
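In StyleCLIP's latent-optimization variant, a latent code is optimized so that CLIP scores the generated image as matching the text prompt, while an L2 term keeps the code near its starting point (the paper also uses an identity-preservation loss, omitted here). A condensed sketch; `G` and `clip_distance` are placeholders for a pretrained StyleGAN generator and a CLIP similarity loss:

```python
import torch

def edit_latent(G, clip_distance, w_init, text_tokens,
                steps=200, lr=0.1, l2_lambda=0.008):
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        image = G(w)
        loss = clip_distance(image, text_tokens) + l2_lambda * ((w - w_init) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Toy usage with stand-in modules (a real setup uses StyleGAN + CLIP):
G = torch.nn.Linear(512, 3 * 32 * 32)
clip_distance = lambda img, txt: (1 - torch.cosine_similarity(img, txt, dim=-1)).mean()
w0 = torch.randn(1, 512)
w_edited = edit_latent(G, clip_distance, w0, torch.randn(1, 3 * 32 * 32), steps=10)
```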
VAE Implementation. GitHub repository: https://github.com/SUNGBEOMCHOI/pytorch-VAE **This post was written by 최성범 of YAI's 9th cohort on the paper-implementation team. About VAE: A VAE consists of two main components: an encoder and a decoder. The encoder takes in data points and maps them to a lower-dimensional representation, known as a ..
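To make the encoder/decoder description concrete, here is a minimal PyTorch VAE sketch with the reparameterization trick and the standard ELBO loss. It is a simplified illustration (layer sizes assume flattened 28x28 inputs), not the code from the linked repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```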
This post was written by 조믿음 of YAI's 11th cohort on the basics team. Study Material: PyTorchZeroToAll Lecture 1 ~ 11; Deep Learning for Everyone (모두를 위한 딥러닝) Season 2, Lab 1 ~ 6. Index: What is Machine Learning? Linear Model, Linear Regression, Gradient Descent, Back Propagation, Chain Rule, Review, Logistic Regression, Binary Prediction, Sigmoid Function, Binary Cross Entropy Loss, Softmax Classifier, Softmax Function, PyTorch Basics, Discussion, What is Machine Learning/Deep Learni..
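In the spirit of the material listed above (linear models, gradient descent, backpropagation), here is a compact PyTorch example that fits y = 2x; the data and learning rate are arbitrary choices:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 4.0, 6.0])        # true relation: y = 2x
w = torch.tensor(1.0, requires_grad=True)

for epoch in range(100):
    loss = ((w * x - y) ** 2).mean()     # mean squared error of the linear model
    loss.backward()                      # backpropagation via the chain rule
    with torch.no_grad():
        w -= 0.01 * w.grad               # gradient descent step
        w.grad.zero_()

print(f"learned w = {w.item():.3f}")     # converges toward 2.0
```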
U-Net: Convolutional Networks for Biomedical Image Segmentation **This post was written by 안정우 of YAI's 10th cohort on the vision-paper-basics team. Abstract: Presents a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path and a symmetric expanding path: the contracting path captures context, and the expanding path enables precise local..
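A stripped-down sketch of that contracting/expanding structure with a single skip connection; the channel sizes are illustrative, and the real U-Net is much deeper and uses unpadded convolutions:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.bottom = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)  # upsampling step
        self.fuse = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, out_ch, 1)                # per-pixel class scores

    def forward(self, x):
        skip = self.down(x)                  # contracting path: capture context
        x = self.bottom(self.pool(skip))
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)      # skip connection (concatenation)
        return self.head(self.fuse(x))       # expanding path: precise localization

out = TinyUNet()(torch.randn(1, 1, 64, 64))  # -> (1, 2, 64, 64)
```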