Artificial Intelligence is all the rage these days, with ChatGPT being more popular than Taylor Swift.
A pressure point for AI interfaces–whether image, voice, or text–is content deemed “inappropriate” or “obscene.” For example, if you use Midjourney, it won’t take long to encounter their list of banned words, often words that could easily be troublesome in one context but innocuous in another. (For example, “blood” was previously banned, but I used it successfully today for “blood-red moon.” Midjourney is evolving.) Having such controls is not a bad thing–in using Dall-E, for example, some images returned were, to my eye, infused with racist tropes.
Which leads to the point of this post: I have been using Murf.ai, an AI voice generator, on a project that sets canonical poems to music (confession: it’s a weird, quirky project). I like Murf.ai–the interface allows you to play with pitch, speed, and even the style of voice. More importantly, their voices sound better than I do, whether singing or talking (trust me), and they allow me to add variety and polyvocality to the larger project.
So what’s the problem? The last poem I selected was Allen Ginsberg’s “Howl.” If you know this poem, you know it is a supermarket of obscenities! When Howl was published in book form, it was printed in London. Upon shipment and clearing customs in the US, then subsequent sale at City Lights Books in San Francisco, both Lawrence Ferlinghetti and the store’s manager, Shigeyoshi Murao, were arrested and put on trial for “obscenity” in The People of the State of California vs. Lawrence Ferlinghetti. Ferlinghetti was cleared, “Howl” became one of America’s most famous poems, City Lights Books is still open and legendary, and there was a movie made about the trial in 2010 (starring James Franco).
Today, I received the following email from Murf.ai:
Thank you for using Murf’s platform for your content creation needs.
This email is regarding one or more of your projects….that violates our Terms of Service concerning the usage of our platform. Our system has detected content that might be inappropriate, profane, defamatory, obscene, or indecent in nature.
In the interest of protecting the rights and integrity of our voice actor partner community, our Terms of Service restricts Murf Studio users from creating such content. Please refer to our Terms of Service to know more about our content policies.
To the best of your knowledge, if your script includes any words or phrases which violate our Terms of Service, we request you to remove such content from Murf Studio projects.
However, If you believe that your project does not contain any inappropriate words or phrases and our system has incorrectly tagged this, kindly contact us at email@example.com.
If you have already made changes to your content, kindly ignore this mail.
We apologize for any inconvenience caused and thank you for your continued support in the journey.
Email, March 18th, 2023
Customer Success Team
History repeats itself–or, if that phrase is too much of a cliché, the tools that have people frothing at the mouth for an accelerated future are struggling to accept settled precedent from the past.
When it comes to AI, we often worry about abuses with very real dangers involved. However, we don’t talk so much (yet) about AI’s connection to, say, free speech. To be clear, I think this email from Murf.ai is good and totally appropriate. Their program/algorithm searches for specific words, and Murf.ai let me know about it, leaving the door open to discussion. Their email is polite and, frankly, correct–the project I created does include those things, which, on the surface, makes me (via Allen Ginsberg) sound bad.
Still, take the statement: “To the best of your knowledge, if your script includes any words or phrases which violate our Terms of Service, we request you to remove such content from Murf Studio projects.” This is where the problem arises.
The words and phrases are correctly tagged! But the project itself, the poem “Howl,” was ruled in 1957 to not be obscene and to be of “redeeming social importance.” Am I supposed to email Murf.ai back and say, “How dare you! This project is of redeeming social importance! See established case law!” No. Of course not. But we have an exposed limitation that carries over to other AI interfaces, pointing to a weakness that could negatively affect content creators: the focus on “inappropriate” or “banned” single words or phrases that do not account for larger context.
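That limitation is easy to sketch. What follows is my own minimal illustration–not Murf’s actual system, and the banned list and function name are hypothetical–of how a filter that checks individual words against a list has no way of seeing that “blood” in “blood-red moon,” or a profanity inside a canonical poem, might be innocuous in context:

```python
import re

def flag_content(text: str, banned: set[str]) -> list[str]:
    """Return every banned word found in the text, ignoring all context."""
    # Lowercase and split on non-letters, so "blood-red" yields "blood" and "red".
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(tokens & banned)

# An innocuous phrase still trips the filter:
print(flag_content("a blood-red moon rose over the bay", {"blood"}))  # ['blood']
```

A smarter system would weigh the surrounding words, the work as a whole, or even its legal and literary history–exactly what a word-list filter cannot do.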
That being said, I 100% see the value of such monitoring and maybe this makes me hypocritical in that I only accept certain kinds of free speech. “Howl” has been, since before my birth, pre-approved as being important and significant, and that was reinforced during my education where “Howl” and other obscene wonders transformed my life for the better. (Another confession: I like obscenity, especially in the form of swearing.) But there are many more “obscenities” to come, and all without card-carrying approval.
But here we are: innovative disrupter flex-o-vators impatient for 2057, while the new tools can still get hung up on 1957.
I don’t know what this means, but this I can guarantee: there will be many more “Howl” trials, where the object that was once a book or sheets of paper is replaced by the algorithms and their renderings of our desires.
Here’s what I wrote to Murf.ai:
Dear Murf.ai Team,
You are correct: the project does contain the type of language you describe. The project is the text of Allen Ginsberg’s poem “Howl,” of which I wanted a voice reading that I could listen to and use (rather than audio of Ginsberg himself reading the poem). I guess it’s ironic that this poem was the subject of an obscenity trial in 1957–that said, should I delete the project anyway? I am happy to do so, no problem. No ill will intended; I just wanted a different reading of a canonical poem (for a project I am working on), and the Murf.ai voices are excellent for such things.
Chuck Rybak
Email, March 19th, 2023
[Note: some basic information for this post was gathered from James Sederberg’s post “The Howl Obscenity Trial” via an “Attribution-NonCommercial-ShareAlike 3.0” license, and thus this post can be used under the same terms.]