An interesting paper by Durt et al. discusses why ChatGPT and other large language models can seem so good at modelling human thinking. Generative artificial intelligence exploits the structures and patterns of human language to produce outputs strikingly like those of human beings. The so-called neural network architecture of ChatGPT, although it can't act independently or make decisions, generates responses based on the patterns and associations found in the vast body of text it was trained on. But meaning has no existence outside of language use; it arises from it. So perhaps we shouldn't be so surprised by the extent to which thinking itself is guided by patterns. Thinking can be intuitive, even creative, but most of the time we're just pinching ideas from other people.
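To make the "patterns and associations" point concrete, here is a toy sketch of my own (not an example from Durt et al., and nothing like ChatGPT's actual transformer architecture) showing how even a crude bigram model can produce passable text purely from statistical patterns in a corpus:

```python
import random
from collections import defaultdict

# Toy bigram model: learn which word tends to follow which,
# then generate text purely from those statistical patterns.
# A vastly simplified stand-in for what a transformer does
# at far greater scale and sophistication.

corpus = (
    "meaning has no existence outside of language use "
    "and rather results from language use itself"
).split()

# Count which words follow each word in the corpus.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    words = [start]
    for _ in range(length - 1):
        candidates = followers.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("meaning"))
# e.g. "meaning has no existence outside of language use"
```

Even this trivial model produces grammatical-looking fragments without any grasp of what the words mean, which is the paper's point writ small: fluent output can come from pattern alone.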