Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs hallucinate because of their architecture, not because of the training... This discussion is more science fiction than anything else. I doubt LLMs are even the right approach to go towards AGI. LLMs may be able to simulate one part of the brain well, but it's just so much more complex... Where's the next technical breakthrough after 2017 transformers?
youtube AI Moral Status 2025-11-03T05:4…
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: resignation
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzn_IKre8Q3Ac-ZgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOlJH3MRZNJZs6Sap4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzWoSgkoR5BxrplSTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvUDlGQfN8ZzJJWwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzMgySiEz2yhF51O854AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwFE-FHa_sLG-vXkg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzFQ_vWNd2gyl7XkFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgySVE2ZBVNUJ9ALttl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwXywL2CE5FZbPOO954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
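As a minimal sketch of inspecting the raw model output programmatically, assuming the response is a JSON array of per-comment codings shaped like the one above, the coding for a given comment id can be looked up like this (the two sample rows are copied from the response above; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects,
# each carrying the four dimensions (responsibility, reasoning, policy, emotion).
raw = '''[
  {"id":"ytc_UgzWoSgkoR5BxrplSTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Fetch the coded emotion for the comment shown above.
print(codings["ytc_UgzWoSgkoR5BxrplSTR4AaABAg"]["emotion"])  # resignation
```

Indexing by `id` also makes it easy to cross-check that the values shown in the Coding Result block match the corresponding entry in the raw response.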