Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To me, the risk of AI isn't the threat of an apocalyptic kill-all-humans, but rather the societal risk in how humans will recede into doing nothing of value when AI can do it for them. No work, no struggle, no aspirations, just senseless nihilism. That's what will destroy us.
youtube AI Governance 2023-07-07T08:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzFi56sf3FGaa9uqK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7PauKgT_XqKhVLkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw5eNhRc7gMrIzlW354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx6w_TIsXO2wrcSxRB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxwotVVHc4EW0qfPGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJxFndCoWeTpg0lYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzWUOTKcJowH7GIESR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwDVBvFE7F8nnBO_ZJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx2OhMjk48qV1duOhZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxCZPXEt2PYHQX-yR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
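The raw response above is a JSON array of per-comment codes keyed by comment ID. A minimal sketch (assuming Python with only the standard library; the variable names are illustrative, not part of the tool) of how such a batch could be parsed and matched back to individual comments:

```python
import json
from collections import Counter

# Two records copied from the raw response above; in practice this string
# would be the full model output for the batch.
raw = (
    '[{"id":"ytc_Ugx2OhMjk48qV1duOhZ4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_UgzxCZPXEt2PYHQX-yR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index by comment ID so each coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

# Tally one dimension across the batch, e.g. the emotion codes.
emotion_counts = Counter(r["emotion"] for r in records)
```

Indexing by `id` is what lets the inspection view above show the coding result next to the exact comment it belongs to.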