I breathed a sigh of relief recently and, if I’m being totally honest, felt a tad smug too. Anyone who has followed me for a while knows I’ve had a bit to say about the rapid emergence of AI tools, the inherent biases they perpetuate, and what they could mean for the whole of humanity, not just my future prospects as a writer.

I recently spoke to a digital agency contact I met at a networking event last year to see how things were going. The agency is relatively new, and since AI is also making its mark in the web development space they’re in, I asked how they’d been affected.

“It helped us speed up some of our processes. Plus, we might’ve found a way to build a service around ChatGPT. However, we’re yet to test how the market responds to that,” came the reply.

He went on to say, “I do see a slight change where some agencies have turned back to using professional copywriters again. I have met agency owners who lost clients because of their overreliance on AI, too. We were outsourcing some of our SEO work to one Perth-based agency, and they went full ChatGPT on their blog content. We ended up having a not-so-pleasant conversation with our angry client and had to stop outsourcing to them.”

That sentiment was reiterated in another conversation the following week, when a Brisbane-based agency contacted me about my availability. They want a pool of copywriters again because their AI-generated content isn’t performing.

Without Real-World Data, There Are Real-World Implications

David Quantick rightly predicted in 1986 that pop will eat itself, and if you listen to popular music you’ll know it has been recycled, upcycled, uploaded, downloaded, downed and dissected. It appears our haste to adopt faster and easier ways of working may have bitten us on the bum sooner than many anticipated, and in a strikingly similar fashion.

When AI models are trained on too much synthetic data and not enough real-world data, a feedback loop forms: each new generation of the model learns from the output of the last, losing touch with reality and producing increasingly nonsensical results. The model quite literally goes mad and self-devours, a condition with the apt acronym MAD: Model Autophagy Disorder.
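To make that feedback loop concrete, here’s a toy sketch of the idea. It’s not any lab’s actual training pipeline, just a simplified statistical illustration in which each “generation” of a model is fit only to the synthetic output of the one before it:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real world" data with a healthy spread of examples.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 21):
    # "Train" a toy model: estimate the data's mean and spread.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only the model's synthetic output.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    if generation % 5 == 0:
        print(f"generation {generation}: spread = {data.std():.3f}")

# Over many generations the spread tends to decay toward zero:
# rare-but-real cases are lost first, and the "model" collapses
# onto an ever-narrower slice of the original data.
```

The details differ for large language models, but the mechanism is the same: without fresh real-world input, each round of self-training discards a little more of reality.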

AI models that haven’t been trained on data representative of the real world can generate inaccurate or misleading results. Businesses relying on them for marketing, customer service or other tasks end up wasting money on a poor ROI, and potentially needing to go into PR mode to repair a damaged reputation.

This also raises ethical and legal concerns where AI models are used in healthcare, particularly in domains like diagnostics and personalised medicine, where misinterpreted data and biased predictions could lead to ineffective or dangerous treatments, or to delayed or missed diagnoses.

Despite all the concern about their capacity to supplant us, these groundbreaking language models still require pure human input.

While real-world data is not without its own inaccuracies and biases, companies need data that’s uncorrupted by synthetically created information to train new AI models effectively, so filtering it out has become a whole research and development area. Engineers at software companies that value quality outputs now have the fun task of manually sorting data to ensure AI is not being trained on debased, artificially generated information.
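For a sense of what that sorting can look like in code, here’s a minimal, hypothetical sketch of a provenance-based filter. The field names and rules are my own illustration, not any vendor’s actual pipeline; real systems layer classifiers, deduplication and human review on top of crude checks like these:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str        # e.g. "licensed", "web_crawl", "model_output"
    crawled_year: int  # year the text was collected

def keep_for_training(doc: Document) -> bool:
    """Crude provenance filter: keep only data unlikely to be synthetic."""
    # Anything known to be machine-generated is out.
    if doc.source == "model_output":
        return False
    # Web text scraped after generative models flooded the web is
    # treated as suspect (a blunt assumption, purely for illustration).
    if doc.source == "web_crawl" and doc.crawled_year >= 2023:
        return False
    return True

corpus = [
    Document("Hand-written product review...", "licensed", 2019),
    Document("Auto-generated blog post...", "model_output", 2024),
    Document("Forum thread scraped last week...", "web_crawl", 2025),
]
clean = [d for d in corpus if keep_for_training(d)]
print(f"kept {len(clean)} of {len(corpus)} documents")
```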

What has been your experience with using AI in your business? I’d love to know. Email me at creative at scribecartel dot com and share your story with me.

If you haven’t already, I encourage you to learn more about the risks and benefits of AI, and about responsible development and data practices, through the links referenced in this piece.

Other articles you might like
🌶️Integrity is not a marketing strategy🌶️
🥗Flesh Out The Flavour. Why Your Readers Deserve A Gourmet Content Experience 🥗