Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
if AI's are so smart, they would already have told us that Mars can never be terraformed to become earth-like. It would already have told us that humanity is on the course of trashing the only planet where humans can reside, leading to the great anthropocene mass extinction. I would have told us that no political party is presenting voters with a way out. It would have told us that the UNIVERSE is now presenting humanity with a ultimatum, like the ET's in the movies "The Day the Earth Stood Still", one that transcends all boundaries of space and time, stating that our days are numbered if humanity does not change. You can take that to the bank!
youtube AI Governance 2025-07-17T17:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
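A raw response like the one above can be parsed into per-comment codes by keying on the `id` field. This is a minimal sketch, not the project's actual pipeline code; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the response, and the single-record `raw` string here is just one entry excerpted from it for illustration:

```python
import json

# One record excerpted from the raw LLM response above (illustrative input).
raw = ('[{"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"outrage"}]')

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Parse the JSON array and index each record's four coding dimensions by comment id.
records = json.loads(raw)
codes = {r["id"]: {dim: r[dim] for dim in DIMENSIONS} for r in records}

print(codes["ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg"]["emotion"])  # outrage
```

Looking up `ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg` yields exactly the values shown in the Coding Result table (responsibility none, reasoning consequentialist, policy unclear, emotion outrage).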