ChatGPT, and large language models (LLMs) in general, have become a growing source of information. The caveat is that their responses must be assessed before they are used, so the emphasis shifts from efforts to generate information to efforts to assess its quality.
In this talk we turn our attention to the design of the prompts used to learn from LLM platforms. We use JMP to implement statistically designed experiments that set up a sequence of prompts, and we analyze the responses with text analytic methods such as latent semantic analysis. We then apply generalized regression to model the effect of the prompt factors on the topic scores derived from Text Explorer.
It is remarkable how smoothly JMP combines design of experiments, text analytics, and penalized regression to enhance learning from LLMs. This approach can be applied in any application domain and, from that perspective, is widely generalizable.
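Since the abstract describes the workflow only at a high level, the sketch below illustrates the same idea with open-source Python tools rather than JMP itself: a factorial design over prompt factors, latent-semantic-analysis-style topic scores for the responses, and a penalized regression of a topic score on the factors. The factor names, levels, and placeholder responses are hypothetical and stand in for real LLM output.

```python
# Illustrative sketch of the DOE -> text analytics -> penalized regression
# pipeline; not the presenters' JMP/JSL workflow. Factors and responses
# are hypothetical placeholders.
import itertools

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import ElasticNetCV

# 1. Designed experiment on prompt factors: a full factorial over
#    hypothetical factors (tone, detail level, role framing).
factors = {
    "tone": ["neutral", "persuasive"],
    "detail": ["brief", "elaborate"],
    "role": ["none", "expert"],
}
design = pd.DataFrame(list(itertools.product(*factors.values())),
                      columns=list(factors))

# 2. Build one prompt per design row; in practice each prompt would be
#    sent to the LLM and the response text collected. Stand-in text is
#    used here so the example runs on its own.
design["prompt"] = ("Answer in a " + design["tone"] + ", "
                    + design["detail"] + " style, role: " + design["role"] + ".")
responses = ("stand-in response written in a " + design["tone"]
             + " voice with " + design["detail"] + " detail").tolist()

# 3. Latent semantic analysis of the responses: TF-IDF followed by a
#    truncated SVD yields topic-like scores (analogous to Text Explorer
#    topic scores in JMP).
tfidf = TfidfVectorizer().fit_transform(responses)
topic_scores = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# 4. Penalized regression of one topic score on the coded prompt factors
#    (analogous to Generalized Regression in JMP).
X = pd.get_dummies(design[list(factors)], drop_first=True).astype(float)
model = ElasticNetCV(cv=3).fit(X, topic_scores[:, 0])
print(dict(zip(X.columns, model.coef_)))
```

With real LLM responses, the nonzero coefficients would indicate which prompt factors shift the topic content of the answers, which is the kind of learning the talk describes.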
Presenters
Schedule
10:15-11:00
Location: Nettuno 1
Skill level
- Beginner
- Intermediate
- Advanced