I’m reliably told that we are in the age of artificial intelligence. I’ve also been informed that nobody gives a shit whether you think these systems are artificial or whether they are intelligent. I’ve also been told that these are debates for philosophers, linguists, and other people with way too much time on their hands. I’m not saying they’re jobless, but there’s a certain horniness for these debates among these people.
Look, regardless of what you call them (artificial intelligence, generative AI, or just large language models) and whether you agree or disagree that they are intelligent, the fact is that in a vast majority of domains, these models are much better than human beings. That’s just the reality. You may not like it, and you may not accept it—that’s fine. All I can say is that facts don’t give a shit about your feelings.
The other fundamental reality is that not thinking about generative AI is a luxury that very few people can afford. Not thinking about how it affects our profession, our lives, our world, and what it means for our personal trajectory is a costly mistake.
Now, there is no shortage of commentary about generative AI: hot takes, idiotic views, and loud opinions. While it’s good to listen to “smart” people and their views about AI, the best way to have a reasonable opinion about these technologies is to actually use them.
I understand the temptation of listening to a couple of loudmouths and passing off their opinions as yours or forming opinions that conform to your priors, especially if AI threatens your job, identity, or sense of self. But that’s not a good idea. So instead of adding to the noise with another full-fledged post or hot take, I thought I’d just jot down a few fragmented thoughts about generative AI.
Now, my default model for thinking about my beliefs is to consider them as tentative. I’m stupid, trying to be less stupid. These are beliefs and opinions I have based on the evidence I have as of today, and I can change my mind tomorrow if I come across new evidence or frames of thinking. So, please keep this caveat in mind and make up your own damn mind. Don’t believe everything I say.
1
There is no shortage of hot and loud takes about AI. It’s easy to read and listen to all these people and delude yourself into thinking you have a “constructive view” about this technology. But you’re not just being intellectually dishonest; you’re also being downright stupid. No matter where you end up on the spectrum of whether AI is useful, hyped up, useless, or a civilization killer, not using it and having an opinion about it is disingenuous. If you haven’t used it, you’re lying to yourself.
2
In October 2024, Tyler Cowen tweeted:
I’ve grown not to entirely trust people who are not at least slightly demoralized by some of the more recent AI achievements.
I read that in this post, and I couldn’t agree more. After using these tools for close to two years for different use cases, I’ve made peace with the fact that these tools are simply better than me in pretty much everything I do. In fact, I feel liberated.
If you aren’t at least a little demoralized by what large language models can do, you’re not thinking straight and you’re just burying your head in the sand.
3
Even the oldest LLMs still know more than you'll ever know. I keep using this phrase: large language models have the collective knowledge of humanity that's on the internet. Maybe that's a slight overstatement, but directionally it's true.
Which means while people argue about imagined sci-fi futures like AGI, ASI, or whatever stupid term they’re concocting, the reality is that for normal people like you and me, artificial general intelligence is already here in a practical sense. It’s in your pocket right now. It knows more about everything than you possibly ever will.
If that’s not artificial general intelligence, I don’t know what is.
The problem is that as soon as something magical arrives, we get used to it, and then it ceases to be magic. The same thing is happening with ChatGPT, Claude, and Gemini. We’re normalizing the extraordinary. Wretched creatures we are.
4
Unless you are in the top 5–10% of your profession, if your conclusion is anything other than “these AI models are better than me in a range of tasks,” you clearly haven’t used them or thought about them well enough. #Cope
5
I think this is a golden age for hobbies, side projects, and following your peculiar passions.
These models are brilliant at coding, and it’s now possible to build almost any weird, wacky, wonderful idea you’ve ever wanted. Unless you’re building a complicated CRM or an enterprise product, AI coding tools are good enough.
For example, I don’t know a lick of coding, and I built simple websites like dhwani.ink, paperlanterns.ink, rabbitholes.garden, and fromthedumpsterfire.com, and redesigned bebhuvan.com, all with AI’s help.
Without tools like OpenAI’s Codex and Claude Code, I might have done them through WordPress or Substack, but they’d never have been this good. AI has become a force multiplier, helping me create and curate far more than I could before.
My gut says we’re entering a golden age of side projects in which we’ll see small, weird, and wonderful things, reminiscent of the early days of the internet. A lot of them will be slop and useless, but that’s a good thing.
6
In many cases, I’ve realized that the biggest block to using these AI tools, apart from the fear of being displaced by them, is a fundamental lack of imagination. I think that there’s an epidemic of unthinking in society. Most people would rather do anything than think, like swipe reels.
So when they’re given a tool with the collective knowledge of humanity at its disposal (a capable butler that can do a hundred things), they end up plagiarizing essays or writing shitty LinkedIn posts instead of doing something fun or useful. That’s a profound tragedy. Dumb and wretched creatures we are.
7
Here’s another use case most people miss: AI as a reading companion for difficult texts. Whenever I’m reading something challenging, I’ll pause and ask Claude or ChatGPT for context. It’s like having an expert sitting next to you while you read.
For example, I was recently reading Audrey Truschke’s India: 5000 Years, and I started asking Google Gemini a series of questions like, “Where did the first Indians actually come from? What trade links did the Indus Valley Civilization have with Mesopotamia? How do we know what we know?”
These weren’t questions I could easily Google because they required context, synthesis, and connecting dots across different fields. Thanks to Gemini, my reading experience became richer and deeper. This is the opposite of intellectual laziness. It’s using AI to go deeper into material you’re already engaging with.
8
A while back, I wrote a post called Who Are You Without a Job?
For most people, their identities revolve around their work. Jobs mediate everything from marriage to where you live, what you buy, and what you value. So imagine a scenario where your job gets automated and there’s no alternative career. Who are you then? What do you do with your eight to ten hours a day?
Most people are wedded to their jobs and have little to no life outside them. On balance, I think a lot of jobs will go away not because of AGI or sci-fi nonsense, but because even if AI progress stops today, large language models can already automate a majority of knowledge jobs. That’s a fundamental reality people don’t want to face.
9
You can be a pundit who says things like “LLMs are bad; they hallucinate; they can’t count strawberries,” or you can actually use them to build useful things for others or for yourself. They can be both useless and useful at the same time, and this breaks people’s brains.
As flawed as LLMs are, they are better than most normal people at a range of digital tasks. And most of us are normal. They’ve helped me build ideas I’ve had for years. So if you think they just hallucinate or vomit mediocrity, you’re not talking from your mouth but from the hind part of your lower abdomen.
10
Sure, as things stand today, AIs are best used as assistants rather than autonomous agents in all domains because they can’t yet do everything due to organizational, legal, and knowledge constraints. But if you aren’t at least imagining a scenario where they can do things autonomously, you’re in for a rude surprise.
11
AI might hallucinate, but don’t make the mistake of thinking that you’re smart just because it hallucinates. The average human hallucinates more than AI. Just listen to the average person around you. The opening in the hind part of their lower abdomen does more talking than the mouth.
12
You think AI-generated writing and art is slop? If so, please do remember that you aren’t fucking Da Vinci, Michelangelo, Mozart, Hemingway, or Picasso. So calm the fuck down, bro.
13
Yes, there’s a lot of junk that AI generates. But to label everything AI produces as slop is a terrible mistake.
Used thoughtfully, AI can produce excellent writing across a range of topics, far better than most journalists, bloggers, or “thought leaders.”
Imagine someone who’s brilliant in their field but terrible at writing. With AI, they now have the best stenographer, editor, and publisher at their fingertips for $20 a month. It’s not that AI is bad at writing; it’s that you don’t know how to use it.
14
As things stand today (and this will quickly become irrelevant with each new model release), I think of AI as a creative partner, a fantastic butler, a stenographer, and an editor.
It takes over the grunt work, saves time, and helps me do more. I could do most of these things without AI, but now I can do many more of them, and better.
15
Going back to my first point: unless you’re using AI deeply (pushing it hard, testing its limits, finding its weaknesses), you’re not using it right.
If your opinions come from superficial use and you conclude that “AI is mediocre” or “it hallucinates too much,” you’re just another loudmouth.
AI is helping me write differently. I hate typing but love writing. Now I dump my raw voice notes and let AI organize, edit, and structure them. Every thought and example is mine. AI is just the editor. It’s helping me find a new tone, a new rhythm, and most importantly, it’s helping me write more. I then edit the posts to shape them up and get rid of the mistakes, fluff, and filler that AI adds.
I wrote this post the same way.
16
There’s an uncomfortable mental prompt worth keeping running in the background of your mind at all times. It never gives you a clean answer, but you have to keep asking it:
What are these tools doing to my cognitive abilities?
For example, if you use AI to write more, is it sharpening your edge or dulling your ability to write in the absence of AI? Or is it genuinely enhancing your capabilities as a writer—adding something to your style, your range, and the types of writing you can do?
Are you genuinely protecting something precious by not letting AI touch your writing? That’s fine if it’s a deliberate choice.
But you have to keep asking: What would I do in the absence of AI? How do I want to use this? As a partner, an editor, a co-creator, or a full creator? In what contexts does AI’s role change?
I don’t have settled answers. The line shifts depending on the task, the day, and what I’m trying to do. But not asking the question? That’s where the real danger is.
These are working notes and questions I have at the moment. I’m sure I’ll change my mind tomorrow.
John Maynard Keynes said:
“When the facts change, I change my mind. What do you do, sir?”
I annoy people by writing about it on Substack.
Image: The Nightmare by Johann Heinrich Füssli