Notes
- 2025-09-29-tbd-how-to-train-state-of-the-art-models-using-torchvision-s-latest-primitives
- 2025-09-29-tbd-trivialaugment-tuning-free-yet-state-of-the-art-data-augmentation
- 2025-09-29-tbd-when-does-label-smoothing-help
- 2025-10-05-tbd-an-image-is-worth-16x16-words-transformers-for-image-recognition-at-scale
- 2025-10-07-tbd-deep-residual-learning-for-image-recognition
- 2025-10-25-tbd-dropout-a-simple-way-to-prevent-neural-networks-from-overfitting
- 2025-11-02-tbd-batch-normalization-accelerating-deep-network-training-by-reducing-internal-covariate-shift
- 2025-11-04-tbd-cramming-training-a-language-model-on-a-single-gpu-in-one-day
- 2025-11-04-tbd-very-deep-convolutional-networks-for-large-scale-image-recognition
- 2025-11-12-tbd-denoising-diffusion-probabilistic-models
- 2025-11-16-tbd-alexnet-imagenet-classification-with-deep-convolutional-neural-networks
- 2025-11-19-tbd-gradient-based-learning-applied-to-document-recognition
- 2025-11-27-tbd-going-deeper-with-convolutions
- 2025-11-30-tbd-densely-connected-convolutional-networks
- 2025-12-02-tbd-aggregated-residual-transformations-for-deep-neural-networks
- 2025-12-04-tbd-squeeze-and-excitation-networks
- 2025-12-08-tbd-efficientnet-rethinking-model-scaling-for-convolutional-neural-networks
- 2025-12-10-tbd-unsupervised-domain-adaptation-by-backpropagation
- 2025-12-18-tbd-swin-transformer-hierarchical-vision-transformer-using-shifted-windows
- 2025-12-20-tbd-a-convnet-for-the-2020s
- 2025-12-23-tbd-adam-a-method-for-stochastic-optimization
- 2025-12-27-tbd-weight-normalization-a-simple-reparameterization-to-accelerate-training-of-deep-neural-networks
- 2026-01-02-tbd-layer-normalization
- 2026-01-04-tbd-group-normalization
- 2026-01-04-tbd-mixed-precision-training
- 2026-01-05-tbd-efficient-estimation-of-word-representations-in-vector-space
- 2026-01-07-tbd-learning-phrase-representations-using-rnn-encoder-decoder-for-statistical-machine-translation
- 2026-01-08-tbd-sequence-to-sequence-learning-with-neural-networks
- 2026-01-10-tbd-attention-is-all-you-need
- 2026-01-10-tbd-neural-machine-translation-by-jointly-learning-to-align-and-translate
- 2026-01-11-tbd-deep-contextualized-word-representations
- 2026-01-12-tbd-improving-language-understanding-by-generative-pre-training
- 2026-01-13-tbd-bert-pre-training-of-deep-bidirectional-transformers-for-language-understanding
- 2026-01-15-tbd-language-models-are-unsupervised-multitask-learners
- 2026-01-17-tbd-exploring-the-limits-of-transfer-learning-with-a-unified-text-to-text-transformer
- 2026-01-21-tbd-language-models-are-few-shot-learners
- 2026-01-22-tbd-density-estimation-using-real-nvp
- 2026-01-24-tbd-scaling-laws-for-neural-language-models
- 2026-01-25-tbd-chain-of-thought-prompting-elicits-reasoning-in-large-language-models
- 2026-01-27-tbd-training-language-models-to-follow-instructions-with-human-feedback
- 2026-01-28-tbd-llama-open-and-efficient-foundation-language-models
- 2026-01-29-tbd-lora-low-rank-adaptation-of-large-language-models