Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hinton's nonchalance about super intelligent AI possibly replacing humanity someday disqualifies him from being taken seriously in this debate. This is a moral problem, not just a technical one. There should be a permanent moratorium on AI development until we know how to control it. Doing anything else is grossly irresponsible, and that's putting it mildly.
youtube AI Governance 2024-02-15T18:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw5SNVl_kVXpOfulmR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxbiv6El97zcz5M7Td4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyHhjBJmQYWxz3T5IZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1O16_3R3_kRfHqzJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx9DpAAbjLAHwY1-8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyVVSmWdn7i3ps9u4J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzolGjrO-quKlSW1ht4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwfmPHN601kYE12HnV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxdg91MmSgWj7bR6TB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxl-vyzCgBAviASQ-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
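A minimal sketch of how one might parse and sanity-check a raw response like the one above before accepting the codings. The dimension keys (responsibility, reasoning, policy, emotion) come from the output shown here; the helper name parse_codings and the validation rule (every record must carry all four dimensions plus an id) are illustrative assumptions, not part of the tool.

```python
import json

# Raw model output, truncated here to two records for brevity.
raw = '''[
 {"id":"ytc_Ugw5SNVl_kVXpOfulmR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwfmPHN601kYE12HnV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Keys every coded record is expected to contain (assumed schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> dict:
    """Parse the model's JSON array and index records by comment id,
    rejecting any record that is missing a coding dimension."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {sorted(missing)}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["ytc_UgwfmPHN601kYE12HnV4AaABAg"]["policy"])  # liability
```

Indexing by id makes it straightforward to join each coding back to the original comment, as the page does for the comment shown above.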