Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe we should not call LLMs giving wrong answers "hallucination". Following the opinion of psychology podcaster Dr. Honda, for an entity to hallucinate it first needs a consciousness, a mind (and something comparable to dreams): complex things that LLMs evidently do not possess. Furthermore, speaking from my own perspective as a current undergraduate student of English linguistics, I believe that by using this word we might (sub)consciously grant LLMs the baseline property required for hallucination, i.e. a mind. In the long run, that might add to people overestimating the abilities of this generation of AI. It also covers up the LLMs' high rate of misinformation by not calling it what it is: wrong information.
YouTube 2026-01-09T12:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwYHWbqZ54ejGxUq-x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz38yoNwCGprM9Gr3R4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz-hK1LOR8_MRDn6Lx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzqhyYrSJZgq10c9mp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyuJcq3hbENEtLnvyl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyTtnbsDrCRj52umzZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxKjiXlPpsmKLuOdGp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyvLH7rbAIn3V3ImIh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
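The raw response is a JSON array with one coding record per comment id, so looking up the coding for a given comment is a parse-and-index operation. A minimal Python sketch, reproducing only the record for the comment shown above (the batch format and field names are taken from the response itself; nothing else is assumed):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Only the record for the comment displayed above is reproduced here.
raw_response = '''[
  {"id": "ytc_Ugz38yoNwCGprM9Gr3R4AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "unclear", "emotion": "mixed"}
]'''

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["ytc_Ugz38yoNwCGprM9Gr3R4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological mixed
```

The same lookup against the full ten-record array yields exactly the dimension values displayed in the Coding Result table.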