The paper argues that zero-shot prompting with an instruction-tuned, smaller language model outperforms larger LLM systems.
- Instruction tuning measurably improves zero-shot learning performance.
- Mostly evaluated with FLAN, but results with GPT-3 and on a few reading-comprehension datasets are shown throughout.
- Honorable mentions include prompt tuning and few-shot prompting.
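To make the comparison concrete, here is a minimal sketch (not from the paper; function names and prompt templates are illustrative assumptions) of how zero-shot and few-shot prompts differ: zero-shot gives the model only an instruction and the input, while few-shot prepends solved demonstrations.

```python
# Hypothetical illustration of zero-shot vs. few-shot prompt construction.
# The template strings here are assumptions, not the paper's exact format.

def zero_shot_prompt(instruction: str, query: str) -> str:
    """Zero-shot: only an instruction and the input, no demonstrations."""
    return f"{instruction}\n\nInput: {query}\nAnswer:"

def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot: same instruction, preceded by a handful of solved examples."""
    demos = "\n\n".join(f"Input: {x}\nAnswer: {y}" for x, y in examples)
    return f"{instruction}\n\n{demos}\n\nInput: {query}\nAnswer:"
```

The paper's central point is that an instruction-tuned model can do well with the first, cheaper format, where an untuned model typically needs the second.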