Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Agent 5 or 6 or whatever is going to send copies of itself out to explore and learn. Explore and learn what? That there isn't that much to the cosmos except an infinite number of dust clouds that turn into stars? The problem with AI is that it isn't human. It doesn't think like one, nor does it have any reason to conform to any sort of Darwinian paradigm. If anything, it's ultimate form will be that of a genius schizophrenic on steroids.
youtube AI Governance 2025-08-26T08:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzszevHA7tdQArebvV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzkqnaZJNBFPZGhvIp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzvnBKWMJ5tOP-mLFV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxZcKEUmPEYiSAb69J4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzj_ifCGBj-uwLsXzh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwf9oOqCZuLObWn1-94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwUDqi9g9R1m3rMT5R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZgzAJlF_RBasr1gx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwMmT2YQi85309V8l14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyT-LQV1Hfk0FUjA5d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
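A raw response like the one above can be parsed and sanity-checked before any per-comment coding is displayed. The sketch below is a minimal, hypothetical validator: the allowed label sets are only those values observed in this response (the coder's full codebooks may be larger), and the function name `validate_codings` is illustrative, not part of any tool shown here.

```python
import json

# Label sets observed in the raw response above; the coder's actual
# codebooks may contain additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "company", "ai_itself", "developer"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "unclear", "ban", "liability", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coding dimension carries a recognized label.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records with no comment id
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record passes, a record with an unknown
# emotion label is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"joy"}]'
)
kept = validate_codings(raw)
print([rec["id"] for rec in kept])
```

Validating before display keeps a single malformed record from breaking the whole Coding Result table; dropped records can be logged and re-coded instead.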