Imun Farmer · Estimated harvest: 5 min read
LLM Deep Dive: Unpacking ChatGPT (Smart & Economic Usage)
GPT is not just a chatbot. Decades of human language data are baked into every response it gives.
1. Why Is ChatGPT So Smart?
ChatGPT does not “understand” text the way humans do. It calculates which word comes next. After being trained on a massive corpus of text, it generates the most statistically natural continuation of any given sentence. It does not choose words; it computes probabilities over them.
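That "computes probabilities" step can be sketched in a few lines. The words and scores below are made up for illustration; a real model produces such scores ("logits") over tens of thousands of tokens, then applies exactly this softmax step:

```python
import math

# Toy "logits" a model might assign to candidate next words after
# "The cat sat on the" -- the words and numbers here are illustrative.
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# The model then samples from this distribution, or greedily picks the top word.
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))
```

The key point: nothing in this loop checks whether "mat" is *true*, only whether it is *likely*.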
The architecture enabling this is the Transformer, first introduced by Google researchers in 2017. Its core is Self-Attention: a mechanism that computes relationships between every word in a sentence simultaneously. In “He lost his phone on the bus, and that made him sad,” the model uses long-range context to infer that “that” refers to the event of losing the phone, a connection earlier sequential models such as RNNs struggled to hold across a sentence.
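A stripped-down sketch of that idea: give each word a small vector (the vectors below are hand-picked for illustration, not learned), take dot products between a query word and every other word, and softmax the results into attention weights. "that" ends up attending to the loss-of-phone context rather than to the nearer word "bus":

```python
import math

# Hand-picked 2-D vectors standing in for learned word representations.
vecs = {
    "he":    [0.1, 0.0],
    "lost":  [0.9, 0.3],
    "phone": [0.8, 0.4],
    "bus":   [0.1, 0.9],
    "that":  [0.9, 0.4],   # the query word
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Score every other word against the query via dot product.
query = vecs["that"]
scores = {w: dot(query, v) for w, v in vecs.items() if w != "that"}

# Softmax the scores into attention weights that sum to 1.
total = sum(math.exp(s) for s in scores.values())
weights = {w: math.exp(s) / total for w, s in scores.items()}

# "that" attends most to the losing-the-phone words, not to "bus".
print(max(weights, key=weights.get))
```

Real self-attention uses separate learned query/key/value projections and runs over every word in parallel, but the dot-product-then-softmax core is the same.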

This architecture made large-scale training possible. GPT-1 started with 117 million parameters and about 5GB of training data. GPT-3 scaled to 175 billion parameters trained on roughly 570GB of filtered text. GPT-4 is unofficially estimated at roughly 1.76 trillion parameters. More scale, more capability.
2. The Evolution of ChatGPT — What Changed Version by Version
| Version | Release | Parameters (est.) | Key Change |
|---|---|---|---|
| GPT-1 | 2018 | 117M | First Transformer-based language model |
| GPT-2 | 2019 | 1.5B | Zero-shot learning, 40GB web text training |
| GPT-3 | 2020 | 175B | Chat, translation, coding — ~570GB training |
| ChatGPT (GPT-3.5) | 2022 | — | RLHF applied, conversational interface launched |
| GPT-4 | 2023 | ~1.76T | Multimodal, top 10% on U.S. bar exam simulation |
| GPT-4o | May 2024 | Undisclosed | Text + image + audio unified real-time processing |
| GPT-4.1 | 2025 | Undisclosed | Major gains in coding and long-context analysis |
| GPT-5 series | 2025~ | Undisclosed | Shift toward agentic, autonomous task execution |
GPT-4 scored in the top 10% on a simulated U.S. bar exam. ChatGPT (GPT-3.5) had scored in the bottom 10% on the same exam just months earlier. That is a staggering leap.
The training method worth highlighting is RLHF (Reinforcement Learning from Human Feedback). Human evaluators rate AI responses, and the model is refined based on those ratings. Humans coach the AI the way a teacher coaches a student. This is why ChatGPT's conversations flow more naturally than those of competing models.
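At the heart of RLHF is a reward model trained on those human ratings. One common formulation (a Bradley–Terry-style pairwise loss; the scores below are illustrative, not from a real model) pushes the human-preferred response to score higher than the rejected one:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).
    # Small when the chosen response already scores higher, large otherwise.
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# When the reward model ranks the human-preferred answer higher, loss is small;
# when it ranks the pair the wrong way around, loss is large.
good_ordering = preference_loss(2.0, -1.0)
bad_ordering = preference_loss(-1.0, 2.0)
print(round(good_ordering, 3), round(bad_ordering, 3))
```

The trained reward model then stands in for the human raters, scoring the language model's outputs during a reinforcement-learning fine-tuning phase.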
3. When ChatGPT Lies — Understanding Hallucination
If you use ChatGPT long enough, you will encounter it: a confident, fluent, completely wrong answer. A nonexistent paper title. A fabricated statistic. A quote that no one ever said. This is hallucination.
The cause is structural. ChatGPT does not recall facts — it generates the most probable next sentence. On data-rich topics it usually gets things right; on niche or recent information, it fabricates plausible-sounding text. It does not stop when it does not know. That is the problem.
Three practical ways to reduce hallucination:
- Assign a role: “You are a rigorous fact-checking editor” — giving the model a responsible identity makes it more cautious.
- Demand sources: “Include links and cite your sources” — this prompts the model to flag uncertainty.
- Specify a timeframe: “Based on data from 2025” — time-anchoring reduces the drift into outdated or invented information.
Never treat a ChatGPT answer as a final source. Treat it as a first draft, always.
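The three hedges above combine naturally into a single reusable system prompt. A minimal sketch — the exact wording is illustrative, adapt it to your own domain:

```python
def build_prompt(question: str, year: int = 2025) -> str:
    # Combine the three hallucination-reducing hedges: role, sources, timeframe.
    role = "You are a rigorous fact-checking editor."
    sources = "Include links and cite your sources; say 'unknown' when unsure."
    timeframe = f"Base your answer on data from {year}."
    return "\n".join([role, sources, timeframe, "", f"Question: {question}"])

prompt = build_prompt("How many parameters does GPT-4 have?")
print(prompt)
```

The same scaffold works whether you paste the text into the chat window or send it as a system message through an API.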
4. Pricing — When Does Paying Make Sense?
| Plan | Monthly Cost | What You Get | Best For |
|---|---|---|---|
| Free | $0 | Basic model, usage limits | Casual use, first-time users |
| Go | $8 (~₩11,000) | Mid-tier performance | Occasional professional use |
| Plus | $20 (~₩29,000) | Latest models, priority access | Daily work users |
| Pro | $200 (~₩290,000) | Unlimited, all advanced models | Developers, researchers, power users |
Honest advice: if you have not seen the “usage limit exceeded” message three or more times a week, the free tier is enough.
The real value of Plus is not additional features — it is speed and stability. During peak hours (evenings), the free tier slows or blocks. Plus users get fast-lane access. If you work with ChatGPT daily and rely on file analysis or coding assistance, Plus is worth more than its $20/month cost.
5. Prompting — Better Questions, Better Answers
Asking ChatGPT “write a blog post” is like walking into a restaurant and saying “give me something tasty.” The result is entirely unpredictable.
The gap between effective and ineffective ChatGPT users comes down to one thing: the prompt. The formula that works:
[Role] + [Purpose] + [Constraints] + [Example]
```
You are a senior marketing strategist with 10 years of experience.
Create an SNS content strategy for a small startup.
Budget: under ₩1M/month. Focus on Instagram and YouTube Shorts.
Include a weekly posting schedule with at least 2 posts per channel.
```
This framing locks in context and produces specific, usable output.
Additional prompt patterns worth keeping:
- Format control: “Present this as a table”, “Bullet points, max 5”, “Under 500 characters”
- Step segmentation: “Complete only step 1 for now”
- Feedback loops: “Expand point 3 with more detail”
- Few-shot examples: “Write it in this style:” + sample text
Save your best prompts as templates. Roles and constraints rarely change — only the topic does. Reusing tested prompts saves significant time.
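One way to make that reuse concrete is to keep the formula as a template in which only the topic varies. A minimal sketch, with illustrative field values:

```python
# The [Role] + [Purpose] + [Constraints] + [Example] formula as a template.
TEMPLATE = (
    "You are a {role}.\n"
    "{purpose}\n"
    "Constraints: {constraints}\n"
    "Example of the desired output: {example}"
)

def marketing_prompt(topic: str) -> str:
    # Role, constraints, and example stay fixed; only the topic changes.
    return TEMPLATE.format(
        role="senior marketing strategist with 10 years of experience",
        purpose=f"Create an SNS content strategy for {topic}.",
        constraints="budget under 1M KRW/month; Instagram and YouTube Shorts only",
        example="a weekly posting schedule with at least 2 posts per channel",
    )

print(marketing_prompt("a small startup"))
```

A handful of such functions — one per recurring task — turns prompt-writing from improvisation into a few seconds of filling in a topic.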
6. Where ChatGPT Is Heading in 2026
Since late 2025, ChatGPT’s direction has shifted. It is moving from a “question-and-answer” tool to an agent — one that searches, opens files, runs code, and manages schedules autonomously.

Upcoming updates for early 2026 include Voice Mode 2.0 (with real-time emotional recognition), Canvas 3.0 (collaborative live document editing), and a third-party plugin marketplace expansion. GPT-4o entered retirement in February 2026, with GPT-5-series models taking over as the default.
One thing is clear in 2026: the productivity gap between people who use ChatGPT as a search replacement and those who integrate it as a work partner is widening fast.
#ChatGPT #OpenAI #LLM #AI #SmartWork #Productivity #TechBlog #Prompting #AIAgents