Listening to and seeing what they are doing.
I don't think this helps language models; they are just text predictors on steroids.
A Mirror of Human Knowledge, Polished: My responses reflect the best of what humans have thought, written, and discovered, up to the point of my data. I’m not perfect—my knowledge is bounded by what I’ve been trained on, and I can only approximate the latest insights.
While this response might sound insightful, it's wrong. 'Polished' and 'best of' are not how I would describe most of the tripe I've seen 'AI' spit out. It's a blended mess of crap, because the text predictor doesn't actually know what is right or wrong; it's not actually AI.
There was a recent case locally where an online chef sued another online chef for plagiarising recipes from her cookbook. The second chef denied it, of course, and when you compare the recipes they are a little different, but much of the structure and wording is very similar or identical in a lot of places. What all the media reports failed to consider (maybe on purpose) is that the second chef probably just asked an AI model for a recipe, and it spat out a version of the other chef's because at some point it had probably scanned her book.
'AI' might help productivity to some extent, but it's going to come at the expense of creativity and critical thinking. There will be a point in the future where you survey 10,000 people on a subject and they will all give the exact same answers... that's not going to move humanity forward.