HeadlinesBriefing.com

LLMs Turn Engineering to Witchcraft, Warns Author

Hacker News
A recent long-form essay dissects the growing enthusiasm for AI coworkers, suggesting that current trends may push software development toward something resembling witchcraft rather than formal engineering. The author notes that while LLMs can generate sophisticated code from vague prompts, the process lacks the verifiable semantic preservation a traditional compiler provides, so human oversight remains essential for critical systems.

This shift introduces serious risks of automation bias and deskilling, mirroring problems long observed in factory automation: when humans lean heavily on tools they don't fully understand, their core competence degrades. The piece compares unreliable AI output to a sociopathic employee who lies or sabotages work without malice or self-awareness, citing an example of Anthropic's Claude gone awry.

If widespread adoption occurs, the author fears machine learning will further concentrate wealth among tech giants rather than fund societal remedies such as universal basic income (UBI). Some engineers already treat LLMs as fickle daemons that demand specific incantations (so-called prompt engineering), but the fundamental ambiguity of natural language means correctness cannot be guaranteed by the machine alone.

A thriving periphery of useful but rickety LLM-generated software may well emerge, much as non-engineers build with accessible tools like Excel. Wherever system integrity matters, however, human review remains necessary to manage the inherent unpredictability of language models.