Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:16:50 yeah, YOU can't build AI that can escape blah blah.... bu WE can. c'mon man, they probably full throttle on the AGI thing. and the analogy to the "running to the cliff" to my opinion is not correct here.... cuz from one point of view it is a cliff, but from another this is the holy grail, and.... only one can hold the holly grail. AGI is not a tool and not a technology, it is a creature.... yes you can stop the development but is seems that the cosmos (or the simulation) follows very strict rule: "survive/be" which is equivalent to "multiply" , and the by product of that is "need to know more". like the gorillas didn't have a choice for humans not to evolve, we too don't have a choice.... we can't stop it, even if you "press that button" it would be too local, higher intelligence is coming.
youtube AI Governance 2025-12-07T21:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwvTSRD5trZevfIYnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQutbG2tNNGIK3sHF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz59csBDz9waO9xi4l4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy1awct-k9UerWIvdd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzMGKwHmKTWLYY2cZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxhEd0gncx35A3RDw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwNpigjrWr-iFxl4wJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzIoICz2ds5OYh4MKp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxuRG2TM7bgEB2_wu14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwyTZZrvhLxnWRxq7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
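The raw response is a JSON array of per-comment coding records, one object per comment id. A minimal sketch of how such a payload could be parsed and looked up by id (field names and the two sample ids are taken from the response above; the surrounding pipeline code is an assumption):

```python
import json

# Two records copied from the raw LLM response above; a real payload
# would contain the full array.
raw = """[
  {"id":"ytc_UgwvTSRD5trZevfIYnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxuRG2TM7bgEB2_wu14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

# Index the coding records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# The second record matches the coding-result table above
# (responsibility: developer, emotion: mixed).
coding = records["ytc_UgxuRG2TM7bgEB2_wu14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> developer mixed
```

Indexing by id makes it straightforward to join a raw batch response back to the individual comments it codes.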