AI News

Independent coverage of the latest AI tool updates, releases, and comparisons.

AI LLMs

Fine-Tuning

Definition

Fine-tuning is the process of further training a pre-trained AI model on a smaller, task-specific dataset to improve its performance on that particular task. It allows organizations to customize general-purpose models for domain-specific applications without training from scratch. Techniques like LoRA and QLoRA have made fine-tuning accessible on consumer hardware.

How It Works

The pre-trained model's weights are updated using gradient descent on the new dataset, typically with a lower learning rate to preserve existing knowledge. Parameter-efficient methods like LoRA freeze most weights and train small adapter matrices, reducing memory requirements by 90% or more. The fine-tuned model retains its general capabilities while gaining specialized knowledge in the target domain.
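The adapter mechanism described above can be sketched in a few lines. This is a minimal NumPy illustration of the LoRA idea for a single linear layer, not any library's actual API; the dimensions, `rank`, and `alpha` values are arbitrary assumptions chosen for the example.

```python
import numpy as np

# LoRA sketch for one linear layer (illustrative values, not from any model).
d_in, d_out, rank = 1024, 1024, 8

# Frozen pre-trained weight: never updated during fine-tuning.
W = np.random.randn(d_out, d_in) * 0.02

# Trainable low-rank adapters. B starts at zero, so the adapted layer
# initially computes exactly the same output as the pre-trained one.
A = np.random.randn(rank, d_in) * 0.01
B = np.zeros((d_out, rank))
alpha = 16  # common LoRA scaling hyperparameter

def forward(x):
    # Adapted layer: W x + (alpha / rank) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size                 # what full fine-tuning would train
lora_params = A.size + B.size        # what LoRA actually trains
print(f"full fine-tuning: {full_params:,} trainable params")
print(f"LoRA (rank {rank}): {lora_params:,} trainable params "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

For this layer, LoRA trains 16,384 parameters instead of 1,048,576, about 1.6% of the full count, which is where the large memory savings come from: optimizer state and gradients are only needed for the small adapter matrices.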

Key Tools

  • GPT (OpenAI): Industry-leading large language models powering ChatGPT. Pricing: $20/mo (ChatGPT Plus)
  • Llama (Meta): Open-source large language models from Meta. Pricing: free (open source)
  • Mistral: European AI lab building efficient open and commercial LLMs. Pricing: usage-based API

Related Terms

  • Large Language Model (LLM)
  • Transformer
  • Retrieval-Augmented Generation (RAG)