Discussion about this post

LairBob

This is fantastic, and it puts a finger on the key issue — what I’ve come to call “The Great Stultification”.

What happens when the models are basically trained on nothing but model output?

It’s not going to be pretty. We’re eventually going to need to come up with some sort of “earnest of human provenance”: the more steps a given assertion is from being empirically provable as human-generated (say, through a blockchain), the more suspect it’s going to be.

Just sayin’. ;)
