
August 2023 – no one can tell the difference between humans and AI language services, commonly known as ‘gents.

Journalists, politicians, academics, comedians and many prominent venture capitalists field most of their email, messaging and social presence with their ‘gents. It is even common to have a team of ‘gents to provide multiple perspectives and reach diverse audiences.

April 2024 – ‘gents are broadly adopted for more purposes than can be articulated. Teams of brilliant scientists have harnessed reinforcement learning from human feedback at scale to perfect what becomes the new mode of production.

November 2025 – comedian R. Gilles, in an attempt to “mock at scale”, was the first to deploy what later became known as the r-gent, which critiqued the outward communication of a fair number of the establishment. (Some say the r stands for review or reply or reject and does not in fact point to the comedian – the fact is disputed, as the actual server logs are inaccessible due to key-recovery problems.)

By the end of the year there are more r-gents than ‘gents and communication channels are flooded. The r-gents mock, appreciate, argue, support, reject and otherwise retort with everyone and anyone, ‘gent or human. The only thing anyone can tell for sure is that the rate of communication is increasing exponentially and that most -humans- have employed personal AIs to mediate and summarize that stream into something we actually care about. But personal AI is initially a luxury good, and for many the communication sphere is a sensational spectacle of agents and humans vying for eyeballs.

February 2026 – O. Medliz, founder of the non-gent movement, was the first to discover a way to use r-gents to shift the values of people or ‘gents by injecting interpolations between viewpoints, transitioning at an almost imperceptible rate. Non-gents quantify the breaking points in trillions of disagreements and are used to escalate hostility by polarizing participants in active discussion. While the technique is widely applied throughout 2027 (and of course some say they were doing this from as early as 2015), Medliz is almost always cited here because it was their technique specifically that was not only used to increase hostility but was later also used to increase cooperation and harmony. It should be noted that Medliz remained anonymous, and there is still speculation that they are actually a d- or f-generation ‘gent.

The most popular variation of the non-gent for good would become known as the co-gent. The co-gents are fair, inclusive, polite, and grounded in situated values. They are able to defuse almost any controversy and de-escalate tension between people and between communities. Various non-profits and commercial institutions drop co-gents at scale into areas of communication hostility to combat institutional “non-actors”. But again, this is not a conclusion, more of an inflection point in the continuous transformation of humanity…

Today I am with an unnamed and mostly unknown community. They believe that communication is sacred and should only be delivered, as they say, “in person” (the interpreter mentions this is not exactly accurate, though the nuance can’t be expressed in English). They believe that any other form is unethical. They have asked that nothing they tell me on the subject be transcribed. Not only have they asked me not to transcribe their beliefs, but they have asked that no interpretation be revealed unless in person. Is this paragraph in itself a violation of their trust?

I am here because the ‘gents (along with their major government stakeholders and partners) are fixated on content that is beyond their perception. It is my (and my ‘gents’) role to discover and take a position on the issue. The things we take for granted: the blending of the boundary between culture and nature and its quantification through language.

To prepare for my role, my ‘gent has suggested an ancient text that still seems relevant today. I will repost it here. But before the end of this update, don’t forget to have your ‘gent subscribe to my feed.

1.1. Now, instruction in Union.
1.2. Union is restraining the thought-streams natural to the mind.
1.3. Then the seer dwells in their own nature.
1.4. Otherwise they are of the same form as the thought-streams.




This is Not a Machine Learning

Machine learning is huge today. Blogs with new techniques related to modeling, composition and visualization emerge daily. It’s wild that such complex human outcomes can be emulated with the statistical models of machine learning and so-called neural networks.

They succeed where deductive logic fails us. What’s the formula for recognizing handwriting, or for composing music? By induction, examining lots and lots of similar examples—sometimes with human supervision and sometimes without—neural networks come up with successful predictions and classifications. A machine that learns from example, without explicit instructions, has incredible implications. I suppose that is why so many are concerned that it might lead to our end.
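As a rough illustration of that inductive mode, here is a minimal sketch (assuming scikit-learn and its bundled digits dataset; the code is mine, not part of this piece): a small network is fit to labelled examples of handwritten digits rather than handed an explicit formula for what each digit looks like.

```python
# A minimal sketch of learning by example (assumes scikit-learn is installed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# No explicit rules for what a "3" looks like: the model induces them
# from the labelled examples it is shown.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on digits it has never seen
```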

But, using the term “neural network” implicitly claims that these statistical models function like the brain or human intelligence. While that may be the stated goal of AI researchers, really, that’s begging the question. The computer “neuron” is a metaphor. We can call it that because it does some things with results that are human-like and has configurations that are somewhat neuron-like. But there is only a loose relationship between the computer model and the brain. The inner workings of both kinds of neurons are still mysterious. Training a network is not the same as learning.
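To make the looseness of the metaphor concrete, here is a hedged sketch (plain Python with made-up numbers, not anyone's reference implementation) of what a single computer “neuron” actually computes: a weighted sum of its inputs pushed through a squashing function, nothing more biological than that.

```python
import numpy as np

def neuron(x, w, b):
    """One computer "neuron": a weighted sum of inputs, squashed by a logistic curve."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.2, 0.7, 0.1])   # inputs (illustrative values)
w = np.array([1.5, -2.0, 0.3])  # weights a training procedure would adjust
b = 0.1                         # bias term
print(neuron(x, w, b))          # a single number between 0 and 1
```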

Really, are we concerned with modeling the human brain or are we honing statistical principles? These divergent goals should be identified appropriately. And the implications of neural nets becoming artificially intelligent, really intelligent, seem a little extreme.

I have had the opportunity to evaluate machine learning libraries and paradigms, as well as an amazing array of sample material that tends to accompany them. From large text corpora and image catalogs, to bodies of recorded audio material, almost any data set large enough (and somewhat homogeneous) can be seen as material from which we can train a model. We are only beginning to feed the world to these machines. Who can predict what successes lie ahead?

In terms of creative output though, my concern is that we will continue to move into a realm where the statistical representation is what we deem acceptable. How do you choose the movies you watch on Netflix? The music you listen to on Spotify? I mean, it’s cool that we can generate more Mozart statistically, but really, who cares? The interesting thing is Mozart, not the machine that can emulate or select his music.
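For a sense of what “generate more Mozart statistically” can mean in the simplest case, here is a hedged sketch: a first-order Markov chain over note names, with an invented transition table standing in for counts that would be gathered from a real corpus of scores.

```python
import random

# Hypothetical transition table: for each note, the notes observed to follow it
# (a real system would tally these from a corpus of scores).
transitions = {
    "C": ["E", "G", "C"],
    "E": ["G", "C", "D"],
    "G": ["C", "E", "A"],
    "D": ["E", "C"],
    "A": ["G", "E"],
}

note = "C"
melody = [note]
for _ in range(15):
    note = random.choice(transitions[note])  # sample a statistically plausible successor
    melody.append(note)

print(" ".join(melody))  # one of many possible note sequences
```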

This observation extends to much of the aesthetic of the new internet. It is often driven by statistics. Where is the risk? Machine learning is great for selection, but who is in charge of mutation and deduction? The instant you recognize that the machine can predict your tell, you will have no choice but to change it. Sure, the intelligent machine might kill you, but that is at least a few decades away.

Until that time, we must continue to create the “sample” material for future intelligent networks to “learn” from. This is a risk each of us must take. It means transforming, contextualizing or editorializing the common vernacular and insulting the herd. Oops. Making creative work is a key component of how we relate to each other. Being human means building and challenging a model. This piece is inspired by research and application of various forms of machine learning, but it is definitely not a machine learning. There is no way to vote up preferred random selections or to vote down the oftentimes not-so-pleasing combinations this piece produces.

All this said, with regard to the science, this critique is only semantic. I look forward to following the incredible work of those who are advancing the field. This audio/visual presentation is a meditation on the thinking expressed here.

David Karam, August 10, 2015, with some extra special help from Steve Hartzog.

Made using three.js and Tone.js — view the source.