
ChatGPT no substitute for human effort

LLMs will improve to the point where they make very few errors of fact, but the errors will still be there.

02 May 2024

The thing with technology is to never fully trust it. We should have learned this by now, but we humans are prone to falling in love with our favourite toys – which is fine until it's not. This applies doubly to AI engines like ChatGPT and its competitors. They're not particularly dangerous when they're being obviously dumb – it's when they're being convincing that we should worry.

We can add safety measures though, right? Sure, but safety experts will tell you that dashboards, alarms and automated tests can actually make failure more likely, not less. Ever since we invented the oil level gauge, we've trusted it instead of checking the dipstick ourselves – so when the gauge periodically fails, it leaves us stranded by the side of the road with a seized engine.
