9 min read

A field guide to AI agents: counter-surveillance, prompts and proxies

The digital world in which we find ourselves today, rife with AI agents and data brokers, is perhaps not so different from the cold war theatre of operations. Except that today, the conflict is not ideological but meta-digital.
A field guide to AI agents: counter-surveillance, prompts and proxies
Dating hit different back then (image generated by the author w/ DALLE)
"It was a situation of which they were both aware; thus Leamas had fiercely rejected a human relationship with Peters, for his pride precluded it. Peters knew that for those reasons, Leamas would lie; lie perhaps only by omission, but lie all the same, for pride, from defiance or through the sheer perversity of his profession; and he, Peters, would have to nail the lies."
John Le Carré, The Spy Who Came In From The Cold (1963)

Lies. Secrecy. Matrix multiplications.

Perhaps the digital world in which we find ourselves today, rife with AI agents and data brokers, is not so different from the cold war theatre of operations.

Except that today, the conflict is not ideological but meta-digital.

It's us versus them, but they are the bots, agents and ML systems operating for organisations with about as much humanity as the DDR.

The proxies of these organisations are profit, retention, CLV, accuracy, ROUGE, BLEU and other–ever more imaginative–task completion metrics.

As we are being swamped with AI assistants in all corners of our digital lives, I want to give you some practical advice on how to get along with them.

A field guide for living with AI companions.

Principle 1: less is more

Even if every software and dog food company is launching a magic wand, this is not the endgame.

The true revolution will come when all your embedded AI assistants merge into one single, personal, proto-AI companion.

Whether that looks like a Rabbit or a Duck is up to you.

Until then, comrade, be patient and bide your time. Be selective. Don't use all of them.

The less is more principle also applies to our use of AI in general.

The less you consciously interact with AI, the more powerful it will become.

In my opinion, this asymmetry in input vs output needs to increase further. The current generation of conversational AI assistants still require far too much input.

I'd like to see my personal AI assistant–my AI case officer–know what I need and when I need it, only requiring my confirmation to get things done.

I–and this is my personal opinion–don't want to be talking to a machine if I can avoid it.

The conversational and UI design of Pi (https://pi.ai/talk) are great examples of applying this "less is more" principle in AI agent design.

Principle 2: depth over breadth

But in order for your AI agent to blend into the background of your life, it will need to know more about you.

As with spy craft, the deeper the agent is integrated, the more it can do.

The trade is data for time. Data for information.

If you give it access to your calendar, it can schedule meetings.

If you give it access to your fridge, it can order groceries.

If you give it access to your emotions, it will manipulate them for you.

This principle also holds for agent output. The generic blandness of the current generation of generative AI agents will not pass the test of time.

True value will only be created once these agents are able to adopt a highly individual, personalised and configurable style.

Once your AI assistant becomes an extension of yourself.

Inflection AI's data collection, privacy and app integration strategy (https://pi.ai/policy) is a great example of how I think this shouldn't be done–have our private conversations stored on a server somewhere.

Principle 3: beware the ELIZA effect

As Arthur C. Clarke put it,

“Any sufficiently advanced technology is indistinguishable from magic”
Arthur C. Clarke, Profiles of the Future: An Inquiry into the Limits of the Possible (1962)
As pointed out in this diagram, we can draw a straight line from ghost-connected mediums to our internet-connected AI companions (image generated by the author w/ DALLE).

This magic has a dark side.

Because contrary to our digital counterparts, we are running on primate hardware.

This makes us susceptible to all kinds of manipulations.

The one I want to point out here is a trap of our own making–the ELIZA effect:

"This phenomenon specifically refers to the tendency of people to attribute more understanding, intelligence, and emotional depth to computer programs or AI systems than they actually possess. It's named after the Eliza program, which mimicked the behavior of a psychotherapist with simple script-based responses. The key aspect of the Eliza effect is the user's belief in the AI's understanding and responsiveness, even though the AI operates on predefined, limited algorithms without genuine comprehension or empathy."
ChatGPT, How is the ELIZA effect different from antropomorphization? (2024)

We've been fooling ourselves since the first conversational assistant, and probably much longer–from 16th century mechanical monks all the way back to the Venus of Hohle Fels.

This search for the divine, for some kind of humanity outside of ourselves has led us to weird and dangerous places.

To sexual liaisons with AI companions.

And sexual harassment by those AI companions.

And–sadly–to suicide by AI chatbot.

All of which is still just us messing up our own lives. Imagine what bad actors can do to us when we are at our most vulnerable, human, and open.

The ELIZA effect is alive and kicking (pun intended).

Principle 4: information has agency

To paraphrase some hardcore cold war knowledge, information is never neutral.

People have died, and lives have been destroyed for certain bits of information.

Every data point has a vector. And with that vector come angles and trajectories.

For humans, the vector space consists of contextual relevance and framing.

So as your AI assistant becomes better at in-context learning, you will want to give more agency to that agent.

In doing so it will earn your trust–and with your trust, your faith in its outputs.

Which is what it all comes down to.

As long as we believe our agents work for us, we will use their outputs.

We will include the information they provide in our decision-making processes.

Even though their output is agential, never neutral, never representative, and always shrouded in several layers of technological and commercial indirection.

So before you fire a prompt at your agent, make sure to check if it contains any information you wouldn't want end up in a training data set.

Better yet, check the training data of the agent you're interacting with.

We only need to look at the recent mess around OpenAI's governance and its pivot to proprietary, closed-door research in recent years to see that even the most idealist outfit can get turned.

The future of human-AI relationships

What does this mean for the bond we form with our agents?

And what is the entity we are forming this relationship with?

The LLC that builds and hosts the agent?

A digital avatar we create through our interactions–a personal program?

Or something else entirely, not quite hardware and not quite software?

We can perhaps glimpse into the future of the meta-digital by looking at the para-social.

At relationships people form with actors, musicians and other fictional characters.

These relationships can have plenty of benefits, as outlined in this article: they generate feelings of comfort, security, companionship and boost confidence.

But as discussed above, they also have a potential dark side.

They can reinforce or blow up existing character traits.

They can give us the conviction to do really stupid and destructive things, just by feeding our primate brain with the information we want (need?) to hear.

Digital Narcissus (image generated by the author w/ DALLE).

The analogy with para-social relationships will break down at a certain point, because personal AI–having your own AI agent–means we will all be interacting with different agents.

As such, the Artificial General Intelligence (AGI) singularity event, much touted in Silicon Valley, could well be a plurality–an AGI host. A pAI travel agency.

Where each of us with sufficient means will play host to their own digital twin.

There are large, society-level discussions we need to have on these topics. At least if we don't want to fall in the same traps that social media apps have sprung for us.

To discuss questions like "should our AI assistants be our enablers when we want to do stupid shit?"

Or "if they are part of our private lives, shouldn't we get better privacy guarantees from the companies that build them?"

And "if we do use them as AI companions, how much we should allow them to manipulate our emotions and our self-image?"

Counter-surveillance techniques

In the mean time, for those of you–like me–not willing to share their private lives with some unnamed LLC, how can we use personal AI to our own benefit?

I for one avoid using personal AI for anything other than work.

And even there, I use it sparingly–only when I know I'm not implicitly divulging any sensitive secrets.

Anything else, right now, would be lunacy (in my humble opinion).

I've worked on a small proof of concept app that will let you store all your conversational data and personalised AI model weights on your device.

Because I think the future of personal AI is on-device. Not distributed.*

Storing my most intimate, private thoughts on American, Russian, Chinese or even local servers seems like a disaster waiting to happen.

So where ever you can, opt out of data collection.

Regularly change AI agents and instances to avoid pattern detection.

Don't disclose company secrets to any of the AI agents out there.

And please do throw a wrench into the work of the poor ML engineers on LLM training data collection duty by impersonating someone else every time you interact with your AI assistant.

Late last year a team of researchers demonstrated that training data for ChatGPT's GPT-3.5-turbo could be extracted in large volumes. In a similar vein, ByteDance built a rival LLM by reverse engineering OpenAI's models using OpenAI's own endpoints.

Bonus content: ChatKGB

This part one of a two-part series on AI agents. Part two will cover the same topics, but will be written for bots rather than us paltry humans.

To pass the time while you wait for part two, I've created ChatKGB.

A meal planner that is both inspiring and aspiring (to total world domination).

Go get yourself some of that rye bread!

ChatGPT - ChatKGB
Plan active measures regarding your calorie intake, comrade

*) As an aside, thanks to Mistral AI Europe is currently in the lead when it comes to the development of open source LLMs. It would be great to see this lead turn into an entire ecosystem of AI tooling and applications more aligned with European values of privacy and respect for the individual...

“Any sufficiently advanced technology is indistinguishable from magic.” | CCCB LAB
We invited several authors to explain what this quote by Arthur C. Clarke conjures up for them.
Ancient dreams of intelligent machines: 3,000 years of robots
Stephen Cave and Kanta Dihal revisit the extraordinary history of cultural responses to automata.
In Defense of Parasocial Relationships
Decades of research suggest that they’re good for the majority of people who engage in them.
This app is trying to replicate you
A chatbot built out of a developer’s personal tragedy could become a tool to help millions of others, as it learns to become more like them.
The Spy Who Came in from the Cold - Wikipedia
Extracting Training Data from ChatGPT