6 Comments
Jack Bowman:

I've been amazed at how fast Upwork and Fiverr became the most expensive ways to use AI tools. Even if the person you end up working with is human, much of the proposal process is now fully automated, as you've experienced.

AI as a homogenizer is spot on; it says less with more words.

This chart blew my mind. It compares proposals sent to clients on Freelancer.com before and after ChatGPT's public unveiling: what used to average 75 words is now said in 200. https://jackbowman.substack.com/i/178236437/ai-is-making-us-read-more

I get a lot of pushback when I post about how I don't like using AI co-writers or editors. But I think if authors/writers are going to use these tools as polish, then your advice here is so valuable: rewrite it yourself to preserve your authenticity, and use the prediction machines to find patterns you might have missed, not to fix those patterns for you.

Glad I signed up to read.

Cheers!

Patrick O'Loughlin:

haha well said... I find there to be an uncomfortable gap between my technical knowledge, AI's ability to convince me it has the technical knowledge, and its ability to actually execute on that knowledge... sort of a "last mile problem".

Fascinating that the word count went from 75 to 200 - that's an excellent source for this post.

Yeah - cool - so you're still writing entirely "by hand"? Do you ever bounce ideas off an LLM or use AI at some point during your writing process?

Jack Bowman:

Definitely stealing "last mile problem" haha, that's a great way to put that. Most of us feel that way too, I imagine. It's stopped me from asking questions outside my knowledge domains; I'm never sure if what I get has any real merit.

I actually do like to use ChatGPT and other LLMs (Claude is my 2nd, coming up on 1st) to discuss ideas. It's especially useful for playing devil's advocate and helping me see why I may be wrong. It's been most helpful to me in workshopping ideas, but I can't stand the way they write, so I never copy+paste outputs.

The phrase "intentionally unoptimized" has been burned into my brain. It's okay if my work isn't perfect, or if I hit every comma, etc. But my work always reads like I'm a human, and I prefer it that way. My biggest fear of using AI gen content outright is the slippery slope of, "well, if this part was AI generated, what else was?" I feel that way especially when I see authors post very clearly zero shot generated images as their profile pictures, etc.

One of the things that's stood out to me recently is what the Silicon Valley folks are calling "taste," essentially the ability to tell genuinely insightful content from gobbledygook that merely looks insightful. I've had to remind myself not to eschew AI outputs entirely, but to be cautious with them and only put stock in them when the output and the reasoning behind it have earned it.

But yeah, I'm still largely a "by hand" guy who likes to chat with the bots but not let them into the notepad. I think it's created a good balance for me, but everyone's workflow is unique. I've seen some great work that read to me as 100% human, only to learn it was 50/50. But I bet those examples take a lot of prompt engineering to render well.

Patrick O'Loughlin:

gotcha, so because it hallucinates, you don't trust it and steer clear of using it outside of your existing knowledge? I have that same issue when asking about software-dev problems. But often, like in Lovable, I can question it with simple human logic and it will correct some obvious errors, though I'm sure I'm missing some too. What's nice about using it for software dev is that the thing you're building either works or it doesn't, so you can actually test its work. That's harder to do with writing.

Using LLMs as a challenger is a great call. I'm looking to build more of this into my feedback app. I agree - it can counter any argument you make, which helps sharpen your argument and get ahead of obvious criticism and blind spots. I'm working on a feedback method called "counterpoint" where the app basically tries to steelman the opposite side of your main arguments (roughly like the sketch below). It usually helps me gain conviction... or, y'know, sometimes it blows up the essay haha -- but that's a good thing too.
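For the curious, here's a minimal sketch of what that "counterpoint" pass could look like, assuming the official openai Python client; the model name, prompt wording, and counterpoint() helper are illustrative placeholders, not the app's actual code:

# Minimal sketch of a "counterpoint" feedback pass: ask an LLM to
# steelman the strongest opposing position to a draft's main argument.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COUNTERPOINT_PROMPT = (
    "You are a feedback tool. Identify the essay's central claim, then "
    "steelman the strongest opposing position: state it charitably, give "
    "its best evidence, and point out where the essay is most vulnerable. "
    "Do not rewrite the essay."
)

def counterpoint(draft: str) -> str:
    """Return a steelmanned counterargument to the draft's main thesis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COUNTERPOINT_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("essay_draft.txt") as f:
        print(counterpoint(f.read()))

Feeding it the full draft rather than a summary matters: the steelman is only useful if it attacks the argument you actually made.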

"Intentionally unoptimized" is a good one. The issue is that LLMs can only optimize for what has come before. They are built of existing knowledge. Therefore, anything new, that's also more true than what exists, will be considered "unoptimized" or even, incorrect, according to the LLM consensus machine. So agreed, you have to be very careful with how you use it. You can essentially boil this down to "taste" as you point out. And to me, it doesn't really matter who writes it, so long as it speaks to your tastes... but we all have a tendency to treat AI as the teacher, as the one with the answers, and overfitting to that, as a creator, is detrimental because you'll never create anything new in that way, since LLMs want to pull you to the middle, to concensus, to what's already come before.

Experimenting is the key for sure. Having your own ethos is more helpful than flying blind. That's cool that you have yours and it's working for you. I'm still trying to find my exact guardrails.

Will G.:

Love this!