Hike News

Speaking Through the Machine: How to Get LLMs to Write in Your Voice

A common question in the age of language models is: How do I get ChatGPT to write like me? But the real question is deeper: What does it mean for a machine to think in my voice?

I’d like to explore how to reliably steer large language models (LLMs) toward producing content that reflects your voice, values, and conceptual style.


Not Just Statistical Madness: The Anthropic Insight

A persistent myth in the AI space is that LLMs are nothing more than statistical parrots—models that guess the next word in a sequence based on raw probability. While technically grounded in next-token prediction, this description radically understates the complexity of what’s going on inside a transformer.

Recent research from Anthropic challenges this view head-on. In a study of model behavior, researchers demonstrated that the model didn’t simply guess the next word; it planned toward an outcome it anticipated, even if it couldn’t execute it immediately. For instance, when tasked with completing a poem, the model planned to end a line with “rabbit”—but couldn’t output that word right away. Instead, it began generating the line with preceding words that would make sense in retrospect.

Even more striking: the “intent” to say “rabbit” was not stored as a word or a token—it existed as a diffuse activation pattern within the model. Suppressing this activation didn’t just change the last word—it changed the whole sentence structure. The sentence wasn’t built one word at a time by chance. It was constructed as a distributed, planful act that persisted only through the model’s own generated text.

This insight was further validated through a simple yet powerful intervention: giving the model an explicit space to “think” before acting. Researchers discovered that inserting a tag—essentially a scratchpad where the model can write internal notes before deciding on a final answer—drastically improved reasoning and planning. The model wasn’t altered in any structural way. It was still doing next-token prediction. But when allowed to treat part of its output as provisional thought, it revealed deeper abstractions and more coherent logical structure. This implies the model has latent capabilities for complex cognition—it simply needs the right affordances to express them.

This reveals something profound:

The model is using language (serial output) as its working memory, and the continuity of its own output becomes the substrate for deeper abstraction.

In other words, it’s not just statistically completing the next word. It is building and executing plans, compressing and abstracting ideas, and unrolling those abstractions across layers of a deep neural network. Its reasoning is distributed, emergent, and entangled in the very medium it outputs—text.

The model’s “thinking” is real, but alien. Not because it mimics us, but because it does something stranger—it learns to encode generalizations through recursive contextual compression, and then decode them on demand through next-token generation.

This isn’t just computation. It’s a form of cognition.

Flattened Depth: The Neural Mirror

At a foundational level, a transformer is a flattened deep neural network—an unrolled, feed-forward cascade of attention and activation layers. During inference, it doesn’t “remember” in the human sense. Instead, it simulates a kind of emergent intention by moving through a space of learned activation patterns.

When we prompt the model to write in our voice, what we’re doing is this:

Activating the region of its latent space that most closely resembles the slope and space of our own cognitive and stylistic profile.

This is why it’s more than just feeding it a paragraph to mimic. We’re initiating a descent—across a multidimensional field of learned associations—that ideally lands on the same conceptual hilltops we’d arrive at ourselves.


Techniques to Anchor Style and Perspective

There are several structured ways to get LLMs to write like you. I’ll explore them in the context of the most popular option, ChatGPT:

1. Use 2-shot and Multi-shot Prompting

Feed the model examples of how you write. Two good examples of a post or article, followed by a prompt to continue in the same style, often outperform abstract instruction.
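As a minimal sketch, here is what 2-shot prompting looks like when assembled programmatically, using the OpenAI-style chat message format. The helper name, sample texts, and the commented-out client call are illustrative, not part of any specific SDK requirement:

```python
# Minimal sketch: assemble a 2-shot prompt from your own writing samples.
# build_few_shot_messages and the sample texts are illustrative placeholders.

def build_few_shot_messages(samples, task):
    """Interleave user/assistant turns so the model sees real examples
    of your style before the actual request."""
    messages = [{
        "role": "system",
        "content": "You are a writing assistant that matches the user's voice.",
    }]
    for sample in samples:
        messages.append({"role": "user",
                         "content": "Write a post in my voice and style."})
        messages.append({"role": "assistant", "content": sample})
    messages.append({"role": "user", "content": task})
    return messages

messages = build_few_shot_messages(
    ["<your first real post here>", "<your second real post here>"],
    "Continue in the same style: a post on outline-first writing.",
)
# With an OpenAI-style client this would then be sent as, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The assistant turns are the “shots”: the model infers your style from them directly, rather than from adjectives describing it.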

2. Use the System Prompt to Define Your Frame

The system prompt (sometimes hidden behind “custom instructions” in ChatGPT) is where you tell the model:

  • Your background
  • Your tone (e.g., precise, irreverent, strategic)
  • Your purpose (e.g., persuade, provoke, synthesize)
  • Your personal brand
  • Your pillars
  • Essentially, who you are and what you care about that makes your perspective unique

The more concretely you define your POV, the better the LLM can localize the slope of your meaning space.
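Here is a sketch of what those ingredients might look like packed into a system message, again in the OpenAI-style chat format. Every specific in the prompt text below is a placeholder to be replaced with your own details:

```python
# Sketch of a system prompt encoding background, tone, purpose, brand,
# and pillars. All specifics below are placeholders, not recommendations.

SYSTEM_PROMPT = """\
You are my writing partner. About me:
- Background: engineer turned strategy consultant
- Tone: precise, irreverent, strategic
- Purpose: provoke and synthesize, never just summarize
- Personal brand: AI replaces effort, not people
- Pillars: human-centered AI, leadership, systems thinking
Write in first person, with short paragraphs and concrete metaphors."""

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user",
     "content": "Draft a post challenging generic leadership advice."},
]
```

In the ChatGPT app the same content goes into custom instructions; via the API it travels as the `system` message.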

3. Use Projects with Prior Work as Vector Anchors

ChatGPT Projects are a high-leverage opportunity. If you upload your previous writing—LinkedIn posts, blogs, frameworks—the model builds a session-long semantic memory of your work.

This allows it to:

  • Perform comparative style analysis
  • Find conceptual throughlines
  • Build a working vector store of your thought architecture

It’s less about quoting back and more about aligning to the energy of your thinking.
I find it most helpful to have the LLM do a deep analysis of work you have already produced, especially the work you’re most proud of, and save those chats into a project.
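To make the “vector anchor” idea tangible, here is a toy sketch of retrieving your most relevant prior post to pull into context. It uses bag-of-words cosine similarity purely for illustration; a real setup would use a proper embedding model, and ChatGPT Projects does this retrieval for you:

```python
# Toy illustration of vector anchoring: embed prior posts, then pull the
# closest one into context. Word-count vectors stand in for real embeddings.
import math
from collections import Counter

def embed(text):
    # Crude "embedding": a sparse vector of word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

prior_posts = [
    "LLMs plan ahead, they do not just guess the next word",
    "Most corporate leadership advice is vague and performative",
]
query = "how language models plan the next word"
best = max(prior_posts, key=lambda p: cosine(embed(p), embed(query)))
# best -> the LLM-planning post, which would be added to the prompt context
```

The point isn’t the retrieval math; it’s that your prior work becomes the reference frame the model aligns against.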

4. Articulate Your POV, Style, and Value Proposition

Before you can get a model to express your voice, you must be able to express it yourself. Most people struggle here—not with the LLM, but with themselves.

Helpful questions:

  • What’s a belief you hold that others in your field might disagree with?
  • What do people say your writing sounds like?
  • What type of thinking do you want to be known for?
  • Does your expertise fall at the unique intersection of a diverse set of interdisciplinary fields?

This defines the contours of your cognitive terrain—the exact thing the LLM will try to replicate.

5. Prompt as Guardrails, Not Instructions

Don’t tell the model exactly what to say. Give it shape. Boundaries. A tone. A tension to explore.

Example:

Instead of: “Write a post about the importance of leadership.”

Try: “I believe most corporate leadership advice is vague and performative. Explore a post in my voice and style that challenges the industry, using precise metaphors and clear examples.”

6. Start With a Sharp, Specific Thought

The strongest content emerges not from asking the LLM to be creative—but from giving it something real to push against.

I find that rather than having the LLM generate topic ideas, it’s better to consume industry research and keep up with current news yourself—that’s what sparks your own unique take on something that’s already buzzing in the community.

Read. Reflect. Form an opinion. Then ask the model:

  • What are 3 counterpoints to this?
  • What’s a metaphor that captures this tension?
  • What’s a hook to start a post like this?

7. Use Outline-First Writing to Iterate Toward Alignment

To maximize alignment with your voice and structure, start every writing session by having the model generate a high-level outline. Then:

  • Review the outline and tweak it until it reflects your intended arc, tone, and structure.
  • Ask the model to expand on one section at a time—paragraph by paragraph or point by point.
  • Interject your own language and opinions as you go, keeping the model within the contours of your thought.

This method acts like scaffolding: it ensures that the model is not just echoing tone, but following your intellectual structure and flow.
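The loop described above can be sketched as a small driver function. Here `ask_model` stands in for whatever chat-completion call you use, and `edit_outline` is the human-in-the-loop step; both names are illustrative:

```python
# Sketch of the outline-first loop: outline, human edit, then expand
# one section at a time. ask_model is any callable mapping prompt -> text.

def outline_first_draft(ask_model, topic, edit_outline):
    outline = ask_model(f"Give a high-level outline for a post on: {topic}")
    outline = edit_outline(outline)  # you tweak arc, tone, and structure
    sections = [line for line in outline.splitlines() if line.strip()]
    draft = []
    for section in sections:
        draft.append(ask_model(
            f"Expand this outline point in my voice: {section}"))
    return "\n\n".join(draft)
```

Because each expansion call is small and reviewable, you get a natural checkpoint to interject your own language before moving on.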

This turns the LLM into a thinking partner—not a ghostwriter.


LLMs as Thought Amplifiers, Not Creative Replacements

The real power of AI isn’t in generating original ideas.
It’s in pressure-testing, accelerating, and scaffolding your thinking.

Done well, the LLM becomes a prosthetic for faster alignment:

  • Between intention and expression
  • Between intuition and structure
  • Between your style and the final post

LLMs help you make your point of view sharper, faster, and more shareable—but the spark must be yours.


The Gradient of You

Getting an LLM to speak in your voice is not mimicry. It’s resonance. You’re lighting up a zone in its meaning space that bends toward your values, tone, and intellectual DNA.

This is an iterative, embodied, co-creative act. Not prompting, but alignment.

The more you understand your own slope of thought, the better the machine can match it.

Let it echo you. Not replace you.

That’s where the real leverage lives.

The Examples in Action

Below, we’ll look at segments generated with these techniques, from simple to more powerful, and see how the outputs move from generic articles to more unique points of view.

Zero Shot

Prompt: write a popular article about LLM that would have a high click through rate

A very generic result. Anyone could produce this.

⚠️ What’s the Catch?
Like any powerful tool, LLMs come with risks:
Misinformation: They can sound confident but be wrong.
Bias: They can unintentionally reflect harmful stereotypes.
Job disruption: They’re starting to replace tasks done by humans.
Governments and companies are scrambling to keep up with regulation — while users rush to adopt.

System Prompt

Prompt: write a popular article about LLM that would have a high click through rate

This already reflects my people-focused mindset and an idea I’ve previously explored: AI replacing effort, not people, per se.

AI Isn’t Taking Your Job—It’s Taking Your Workflow
Think of LLMs as interns on steroids. They can summarize 100 pages in seconds, generate legal briefs, write ad campaigns, and even debug code faster than most junior engineers. That’s not the future. That’s now.
Companies are quietly replacing entire teams with LLM-powered tools. Productivity is skyrocketing. But so is anxiety.
“We didn’t fire anyone,” one startup founder told me. “We just didn’t need to hire anyone.”

Projects and POV

Prompt: write a popular article about LLM that would have a high click through rate

Here you can see it accessing recent topics I’ve explored, and it is much closer to my conversational writing style. It has also incorporated my opinions on how conceptualizing models affects the way you interact with them and their output quality.

“You’re Not Talking to a Robot. You’re Talking to 175 Billion Trained Gut Feelings.”
What Large Language Models Really Are—and Why Most People Are Using Them Wrong
If you’ve ever yelled at ChatGPT for sounding too generic, this article is for you.
Right now, millions of people are trying to get LLMs like GPT-4 or Claude to “sound like them,” to write better code, or to make strategic decisions. And most of them are failing—not because the models are bad, but because we’re thinking about them the wrong way.

Projects, POV, System Prompt, Multi-shot

Prompt: Explore an article about LLMs and how currently integrating them to own small segments in strict automation flows is more powerful than pursuing a 1 prompt complex agent

Here we’re much closer. It explores the details around an opinion that is uniquely mine. It touches on how the technical details play a part. And it uses points I’ve made previously to explore my personal prescriptive bounds for effectively using GenAI.

You Don’t Need a Smart Agent—You Need a Sharp Tool
Why embedding LLMs into structured flows beats chasing the 1-prompt AI unicorn
There’s a seductive idea floating around the generative AI world:
“Just ask it once. Let the agent figure the rest out.”
It’s the holy grail of prompt engineering. One master prompt, one orchestration brain, infinite downstream automation.
But if you’ve tried to build anything real with that approach—especially in high-stakes environments like healthcare, finance, or enterprise consulting—you’ve probably run into the same thing the rest of us have:
Hallucinations.
Inconsistency.
Execution plans that fall apart on edge cases.
And hours spent debugging the AI’s “clever” decisions.
The Myth of the All-Knowing Agent
The idea that one complex agent can understand your goals, craft an optimal plan, delegate tasks, manage APIs, and handle real-world variability is an oversimplification of what LLMs are actually good at.
And ironically, it underuses the true power of the model.
LLMs don’t think.
They complete.
They pattern-match based on a giant latent space of human text.
So while they can absolutely simulate “agency” in bounded environments, they are not yet robust general-purpose agents. They make great sprinters for local reasoning, not marathon runners across entire pipelines.