What are LLMs
LLMs for Non Technical Folks

Entrepreneur, Founder, Mentor. Also runs a Registered NGO. Visit my website: sandeepgokhale.com
The Big Day
The goal of this blog is to help non technical people understand what LLMs are and how they work.
30th November 2022 - Remember this date? Sam Altman (from OpenAI) announced ChatGPT to the world.

Within the next five days, it crossed 1 million users.

Since then the world has not been the same.
What is ChatGPT
Imagine a Magical Mystery Ball. You ask it any question and it will “generate” an answer for you. Think of that Mystery Ball as ChatGPT. Technically, we call them AI Assistants.

ChatGPT (Or AI Assistants in general)
Think of ChatGPT as a young student who is very well “trained” on a lot of topics. This student can take questions on a wide variety of topics (you can ask it almost anything) and will give you answers. There are times when it makes mistakes as well (so, be cautious).
When people started using ChatGPT, they found it quite useful at a lot of things and poor at a few (it was bad at basic mathematics and a few other topics). With time, the student improved and now makes fewer and fewer mistakes (still, be cautious).
The Magic Ingredient?? LLMs
Similar to a student’s brain, what AI Assistants have is something called an LLM - a Large Language Model.
Large? Language Model? Let me explain.
Language Models? They are not new, and you have been using them. Say you are messaging someone on your phone and you type the word “Good” - three suggestions appear, perhaps “Morning”, “Day” or “One”. It depends, but something like this could be suggested.
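For the curious, that phone-keyboard behavior can be sketched in a few lines of code. This is a toy illustration only: the suggestion lists below are made up, while a real keyboard learns them from huge amounts of text.

```python
# A toy next-word suggester. The suggestion data is made up for
# illustration; real language models learn it from massive text corpora.
suggestions = {
    "Good": ["Morning", "Day", "One"],
    "Thank": ["you", "God", "goodness"],
}

def suggest_next(word):
    """Return the suggestions for the word just typed (empty if unknown)."""
    return suggestions.get(word, [])

print(suggest_next("Good"))  # ['Morning', 'Day', 'One']
```

An LLM does the same kind of “what word comes next?” guessing, just with billions of learned patterns instead of a tiny hand-written table.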
Large? It’s quantitative - these language models are trained on huge amounts of data.
In simple terms, LLMs are nothing but language models, only very large.
What happens inside an LLM
To understand LLMs, we first need to understand how a human brain works. When we are born as babies, we do not know a lot of things. We learn by doing, observing, failing and what not. The human brain is a fascinating thing in itself.
Inside the human brain, there are billions of interconnected neurons, each capable of emitting electrical pulses that “somehow” enable us to think, feel and take decisions. Every time we learn something new, new connections form.
Similarly, LLMs are trained on massive amounts of data - books, articles, websites, conversations, and even code. LLMs learn by observing patterns in the data, similar to how we humans learn.
If you peek inside an LLM, instead of biological neurons, we have a lot of interconnected artificial neurons (a.k.a. a neural network).
An artificial neuron is simply a math formula. It takes numerical inputs and generates a numerical output. Similar to our brain, the output of one neuron becomes the input of another, and this chain continues (it’s a super long chain) before a final result is produced.
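That “math formula” is surprisingly small. Here is a minimal sketch of a single artificial neuron: it multiplies each input by a weight, adds them up with a bias, and applies an activation function. The weight and bias numbers below are made up; in a real network, training is what sets them.

```python
# One artificial neuron: weighted sum of inputs, then an activation.
# The weights and bias here are made-up numbers; real networks learn
# billions of such values during training.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: negative results become 0

out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)  # (1*0.5) + (2*-0.25) + 0.1
print(out)
```

Chain millions of these together, layer after layer, and you have the “super long chain” described above.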
When you input a question - say, “What is the fastest way to reduce weight?” - it is split into chunks called tokens (via tokenization), each token is converted to numbers, and those numbers are passed through the network of neurons until a final result is produced.
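Tokenization itself can be sketched very simply. The version below just splits on spaces and hands out IDs in order - real LLMs use subword tokenizers (such as byte-pair encoding), and the IDs here are made up for illustration.

```python
# A toy tokenizer: splits text into word chunks and maps each chunk to
# a numeric token ID. Real LLMs use subword tokenizers (e.g. BPE), not
# simple word splitting; these IDs are made up for illustration.
vocab = {}

def tokenize(text):
    """Convert text into a list of numeric token IDs."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # assign the next unused ID
        ids.append(vocab[word])
    return ids

print(tokenize("what is the fastest way to reduce weight"))
# [0, 1, 2, 3, 4, 5, 6, 7]
```

Those numbers, not the raw words, are what actually flow through the network of neurons.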
Fun Fact
Tokenization was the reason why LLMs made some silly mistakes.

😁 😁
Well, issues and mistakes like this have now been corrected. So, if you try this, it will give the correct answer.
Another Fun Fact
Shocking, right? But it is true.
Consider this: Let's say you ask three kids to draw a "house".
Kid 1: Draws a city apartment with tall buildings.
Kid 2: Draws a village hut with trees around.
Kid 3: Draws a beach villa with waves and sunshine.
The question was the same, but the answers were different.
Each kid drew from their own experience and imagination.
That is how LLMs behave too. Ask an LLM a question and it "generates" an answer. We cannot tell for sure why "this" particular answer was "generated". Here is the reason.
LLMs don’t really “store” or memorize “knowledge”. They develop patterns and representations during training and “somehow” are able to produce amazing results. This is why you keep hearing experts say: we don't even truly know the capabilities of LLMs.
What fascinates me most is how much we still don’t know about models we use every day. Each discovery feels like peeling one more layer off a mystery that keeps deepening.
Traditional Applications Vs Machine Learning Applications
Traditional Applications
With traditional applications, there is no “training”. Instead, there is an if-else statement, a for loop, a switch statement, some exception handling, etc.
The output is always predictable (if you are skilled enough to read the code and debug the program).
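To make the contrast concrete, here is a tiny sketch of a traditional application, using the house example from earlier. The rules are made up, but the point is that every outcome was written out by hand, so the same input always gives the same output.

```python
# A traditional application: every rule is hand-written, so the output
# is completely predictable. These rules are made up for illustration.
def house_type(location):
    if location == "city":
        return "apartment"
    elif location == "village":
        return "hut"
    elif location == "beach":
        return "villa"
    else:
        return "unknown"

print(house_type("beach"))  # villa
```

Ask this program for a “beach” house a thousand times and you get “villa” a thousand times - no imagination, no surprises.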
Machine Learning Applications
Recall the kids’ drawing example above? That is how ML applications behave. The output depends entirely on the training, and during the training process, the models learn their own strategies to solve problems.
So, while LLMs can also “draw a house,” their training shapes what kind of house they imagine.
AI Lingo or Jargons
When you read about AI, there are some words that get used a lot, and as a beginner or someone from a non-technical background, they could throw you off. Listing a few common ones below.
| Jargon | Meaning |
| --- | --- |
| AI Assistants | ChatGPT, Gemini, Perplexity, Claude, etc. |
| LLM | The brain of these AI Assistants |
| Some famous LLMs | GPT-4, GPT-5, Sonnet, Opus, Gemini, Nano, etc. |
| Prompt | The question you ask the AI Assistant |
| Token | The chunks your question is converted into |
| Context Window | How much the LLM can remember in a conversation |
| Hallucination | When the Assistant makes a confident mistake |
| Agent | An AI sub-system that can take action by itself |
| Memory | What AI Assistants or Agents can remember |
| Training | Teaching the AI Assistant |
Training Data Quickly Became Stale
Training an LLM involves a great investment of time and money, and one of the biggest problems initially was the training-data cut-off date.
Here is an old screenshot from ChatGPT.

Earlier, you could NOT ask ChatGPT (or any AI Assistant) about any information that came after the cut-off date.

This is now resolved, huge thanks to recent advancements like:
Web Search (AI Assistants can browse the internet)
Computer Use
APIs
AI Agents
MCP (Model Context Protocol) from Anthropic, the makers of Claude
Skills From Claude
Apps SDK and more
These advancements have made AI super interesting. In fact, today you can book your tickets, work with Figma, use Canva, or listen to your favorite song on Spotify and more, all within ChatGPT.

And all this progress is just in less than 3 years of ChatGPT.
Extracting the best from AI Assistants
Remember how I told you that an AI Assistant is just like a young student?
Once the student is trained and ready, and we assign work to them, we need to provide proper information, instructions and context, and correct mistakes if any. This becomes our responsibility.
Imagine these two instructions to the Student:
Make Coffee
Make Sugar Less Coffee
While both are instructions to make coffee, if you are like me, you would probably care only for the latter ☕.
Similarly, If you tell an AI Assistant to:
Write an essay about nature.
Write a 300-word essay about nature, in a conversational and motivational tone.
You can easily decide which one will produce the better result.
Just like a student, AI Assistants need detailed instructions and context to produce the best results.
Fascinating time to be alive
Do you know what happened when muscles were replaced by machines (industrial revolution)?
Do you know what electricity did to us?
Can you think of what Internet did to us?
Now is the time for the AI Revolution.
We are living in a fascinating moment of technology change. For the first time, I am both excited and uncertain. Many of us now have digital Assistants that help us with:
Writing
Brainstorming
Researching
Making decisions
Writing Code
Designing
Testing
Planning and more..
Learn AI
What you can do today is learn AI. When I say learn AI, it’s not necessarily at the neuron level. Learn to use it, adapt it to your day-to-day work, and treat it like a tool.
Once you understand the basics, let your curiosity drive your path.
Let's Connect
Hi, I’m Sandeep Gokhale. I’m passionate about building high-performing teams at my company, Techvito, and I write about technology, people, processes and some more stuff.
If you’re exploring AI, AI Agents or MCP, or looking for a trusted technology partner to:
Build secure, production-ready MCP servers and APIs,
Guide your business through zero-downtime cloud migrations,
Accelerate your goals with clarity, speed, quality, and security,
Work with a team that values reliability, transparency, and trust,
…then my team and I are here, ready to help you make it happen.
Feel free to connect with me on LinkedIn and Twitter.
Until Next time!



