Stop making AI slop for Social Media clout

In late January 2026, while browsing LinkedIn, I came across a post relating to “Not all Black is Black”. The text within the post itself was brief, but it was accompanied by an infographic that I thought was quite useful.

After looking at the infographic, I felt the information was useful and decided to repost it to my followers on LinkedIn, particularly as Black vs Rich Black vs Registration is an ongoing source of confusion in offset full-colour printing.
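For readers unfamiliar with the distinction, here is a minimal sketch of the three “blacks” in question. The rich black recipe shown is only one common example – exact builds vary by printer, press and stock, so always confirm with your print provider:

```python
# Illustrative CMYK builds only; the rich black recipe below is a
# hypothetical example, not a universal standard.
blacks = {
    "plain_black":  {"C": 0,   "M": 0,   "Y": 0,   "K": 100},  # body text, fine detail
    "rich_black":   {"C": 60,  "M": 40,  "Y": 40,  "K": 100},  # large solid areas
    "registration": {"C": 100, "M": 100, "Y": 100, "K": 100},  # crop/registration marks ONLY
}

def total_ink(build):
    """Total Area Coverage (TAC): the sum of all four ink percentages."""
    return sum(build.values())

for name, build in blacks.items():
    print(f"{name}: TAC {total_ink(build)}%")
```

Registration black’s 400% total ink coverage exceeds what most offset presses and stocks can handle, which is one reason it belongs on printer’s marks and never on design elements.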

Now I’m not naïve. I’m used to AI-generated infographics that are absolute slop, containing incomprehensible words and graphics that make no sense. I did look at the images in this infographic and – in my own opinion – felt that they were AI generated, but they largely matched the context of the text they accompanied. I’d also examined the text and couldn’t find any AI artefacts or hallucinations, and what was being said made sense. My opinion was that this infographic had human input but used AI to generate the graphics, so I was willing to let it go.

History repeating?

Days later, I saw a similar graphic reposted by an industry colleague, this time relating to “Not all presses are the same”, again with an infographic that looked strangely familiar. Noting the three-column structure and the graphics not far from the masthead, my heart started to sink a little… “is there some new AI kit that will generate detailed infographics?”. Here is a redacted version of the infographic, as I’ve removed the creator’s branding.

However, unlike the first infographic, which had neat illustrations and not a typo to be seen, this one had several AI-style hallucinations and typographical errors.

Also unlike the Black vs Rich Black infographic, the drawings didn’t make sense in context. Putting aside the art direction of the three main graphics under the masthead, many of the icons didn’t work for me, such as:

  • Under the Digital Printing headline, what looks like a bean-bag surrounded by a chasing circle of cyan, magenta and yellow;
  • Above the bullet point Demands Exact Color, a collection of dots within a blue circle, which doesn’t really represent the bullet point being explained;
  • Next to the Press Type Matters headline, a head silhouette with a prohibition sign through it; and
  • Next to the Get Advice headline, another head silhouette with over-the-ear headphones… but inside the head of the silhouette.

With this in mind, I reached out to the author to say that, while I appreciated the theme of the infographic, its elements needed work. The author responded and informed me that the image was “definitely AI generated”.

AI generated, you say?

Having been told this was AI generated, I went back to the Black vs Rich Black infographic I’d previously shared and had a much closer look… and in the bottom right-hand corner found a graphic stating the following words:

Notebook LM

Now, if you’re not familiar with these words, NotebookLM is a Google-run tool that takes sources of information and distils them into infographics or other presentations. Check out this video to see it in action.

Now, let me be clear – I’m not here to dunk on the author of either post! In fact, I’ve removed their details from the graphics posted here, and do not want the creators harassed, brigaded or bullied.

With all this now known, I will stand by my repost of the “Not all Black is Black” post as the information within the graphic is still relevant. However, I will be much more vigilant when I do repost content from others.

So when is AI generated content acceptable as useful information?

Unfortunately, the answer is subjective. In mid 2025, ABC Australia’s “Gruen” TV show ran a segment giving the panel’s opinions on the present and future use of AI in creative advertising. During the segment, the show featured one of the first commercials made with AI-generated visual footage. Since that advertisement aired, AI-generated visual (and audio) footage has become much more common, not just on the internet but on free-to-air TV.

Having watched the segment, I can see their points. I’ve had ideas for entertaining videos that I’ve wanted to film and upload to YouTube, but the creation requires:

  • video editing skills I don’t have;
  • footage I don’t have, or that would take a lot of resources to film;
  • on-screen talent; and
  • a budget and a film-making crew.

Sure, I could go to a generative AI site that produces video, enter my prompts and have a video pumped out, but as a creator I wouldn’t get the satisfaction of saying “I made this”. I could use it as part of a pitch to show a production company the idea to see if they want to fund a conventionally filmed version, but I wouldn’t want to just take the prompted video, upload it and call it a day.

I’m sure there are plenty of legitimate reasons to use generative AI in the creative space. Sadly, there are many people out there who are using generative AI for nefarious purposes, and the only real barrier to entry for them is cost. The law isn’t keeping up with technology, and many test-cases about the use of AI are still before the courts.

But in the context of posting infographics to LinkedIn, I’m now forced to ask the question – was this infographic posted to LinkedIn for the purpose of actually being helpful, or to boost that particular page’s clout?

Why am I writing this post?

First and foremost, these days we must be on guard 24/7 when it comes to questioning what we see and hear. No longer can we rely on a sound bite, a photo or found footage… and that takes a lot of mental effort and can be draining.

When generative AI was first introduced, telling the difference between human-generated content and machine-generated content was fairly simple – count the fingers, look at the overly saturated colours, look for the nonsense words, and if there was a lot of text, look for lots of bullets (particularly with nearby emojis), em-dashes, examples always provided in sets of three, and phrases in the style of “it’s not this, it’s that…”.

Since the wider introduction of generative AI and more modern developments, being able to tell the difference has become much more intensive and forensic.

What really opened my eyes to how convincing generative AI art can be is a video from Matt Rose on YouTube, in which he demonstrates footage and photographs that – on their face – appear legitimate… until closer examination makes it obvious that the content is artificial.

But Cole, you’re a hypocrite – you use AI!

I want to be very clear with every one of my readers here about how I use AI.

Text

All of the text in the articles for Colecandoo was written by me. No generative AI or LLM was used in the preparation of the text for the articles.

Graphics

So far as the graphics are concerned, I have used generative AI to generate a couple of graphics used on the Colecandoo site, and can list them here:

  • The thumbnail for the InDesign’s MathML revisited article, which was done using WordPress’ own AI.
  • The graphic of two people arguing over a map in the Instagram Reel/TikTok short “terms used in printing – paper towns”, which was created with Adobe Firefly.

Scripts

The scripts offered on my website are either my work or collaborations/assistance from other scripters.

Regular readers of Colecandoo will be familiar with two articles I’d written concerning the Adobe InDesign plug-in Omata MATE – a plug-in that links to various AI/LLM models to generate InDesign scripts or perform AI based tasks on an open file. I still use this plug-in but – at least at the time of writing – haven’t submitted any Omata MATE generated scripts on my site.

Readers may not be aware that sites that host scripts usually receive requests about those scripts, such as bug reports, update/feature requests, and general support (e.g. “how do I get this script to…”). For scripts I’ve written, that comes with the territory. However, for scripts I’ve ‘vibe-coded’ with Omata MATE, providing that support is much more difficult and comes with its own concerns, so I’ve chosen not to publish them at the time of writing.

Prepress purposes

In my daily work, my use of generative AI is to add bleed to artwork, whether with Photoshop or Illustrator (sorry InDesign – your Generative Expand feature still needs work!). I don’t use the creative side of generative AI, not even to make a logo for a mockup.

But Cole, you’re still a hypocrite! You liked – and shared – the post!

As I said earlier in this article – I stand by my repost of the “Not all Black is Black” post as the information within the graphic is still relevant. That said, the next time I see a “helpful infographic” on any social media, I’m definitely going to pause and pay more attention.

Last thoughts on the issue

Unfortunately, AI is here to stay. Conduct a straw poll of who is in favour of being on the receiving end of generative AI and chances are that most people won’t want it… but the trouble is it is getting harder to spot the difference, and it’s likely we’ve been surrounded by AI content that we couldn’t discern from genuine content.

However, seeing AI content on LinkedIn that is masquerading as ‘useful advice’ is disingenuous. Now when I see content like this, I’m not going to think “someone has curated this useful information into a digestible format and made a handy infographic – they must be really helpful and knowledgeable”… instead I’m going to think “here’s a lazy slop merchant pumping out more guff for clout – call them out and block them”.

If content is created by AI tools such as NotebookLM, at least declare that upfront, and don’t try to pass it off as “look what I made!”. NotebookLM is intended to take lots of information and distil it into a more digestible form, but it’s meant for a person’s own notes rather than as a well-curated go-to resource.
