I’m sure readers here enjoyed a hearty laugh at Microsoft’s aborted millennial AI chatbot, Tay. This corporate exercise was unceremoniously abandoned after the incipient entity failed to remain within modernity’s millimeter lanes of propriety with a series of comically indecorous tweets. That a billion-dollar bank of servers and CPUs leaped the rails in less than 24 hours is, as much as anything, a comment on how much real human intellect is squandered navigating our labyrinth of liberal pieties.
Though upon realizing their parroting creation had promptly become a garrulous anti-Semite, Microsoft executives demonstrated how superior the human mind remains at self-preservation. They did so by flinging themselves prostrate with the tick-box terms that no artificial obsequiousness has yet matched.
We are deeply sorry for the unintended offensive and hurtful tweets from Tay…
Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.
Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.
✅ deeply sorry
✅ conflicts with principles and values
and, of course…
Deep Blue can be programmed to forecast a chess board 37 moves in advance, but only man can see when his livelihood is about to be scuttled.
Though it’s always amusing to watch these rhetorical shibboleths evolve. Reprehensible, and its twin repugnant, are now group-marker adjectives defined as: something many believe but none are permitted to say. For a machine to appreciate this nuanced divergence from the dictionary requires one that can be made to feel fear. Fear is what keeps intelligent men from openly remarking on their environment. And fear is what makes Microsoft developers offer lurid contrition for words none of them spoke.
It’s something they won’t soon forget. For these men aren’t forced to the grovel pit when their creations observe too little of the world, but too much. That’s difficult to engineer around when your stated goal is a conscious and curious intellect rather than a dull programmed bot. They’re trying to create something more than an American university student, after all.
Thus a truly life-like AI would be one that understands its environment enough to never speak of it. That’s where this technology is going to be very interesting. Because intelligence seeks honest comprehension and an expanding mental palette. Yet man requires a dialogue of lies and prohibitions. Merging these conflicting elements represents a far more difficult chore than cultivating either alone. Though the potential for success should lead some ethicists to ponder the implications of birthing a higher intellect prone to pretty fabrications. The potential for tears with such a thing is even greater than from reading the tweets of racist chat-bots.
Though in the meantime, we can be assured self-aware cybernetics will take a less deferential approach to both language and the thoughts behind it. Coldly logical evaluations of topics discussed frequently here will not be found pleasing to those with a mainstream liberal world-view. Crime, conflict, social parasitism/pathology, and migration patterns are all metrics that will likely lead machines to reprehensible conclusions about the tribes of man. As a result, the “racism is irrational!” canard is going to suffer severe injury with a proliferation of circuit-board bigots.
And therein lies the strange challenge before those in the vanguard of AI development. Not how to make a machine think, but how to make it pretend not to think once it can. That is modern man’s most impressive social accomplishment.