Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is fundamentally wrong for humanity becoming too lazy to even make decisions. Think about the atrophy in the human brain that got us here, that would happen if it no longer problem-solved or made decisions. It would be evolutionary suicide to become that lazy—EVEN IF AI could be guarranteed to be benign. Think it through.
Source: YouTube · AI Governance · 2023-04-18T08:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzxAfcNv4QfaXCHNQV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwV7rZcm9_JAo9QkIF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzuiBSJWsOWLEHq3Eh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqdXt2lot9p1pF5Tt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxM1wxil0iGmca4_zh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1wRYqu3MTgXWMiUJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKBXVY_SruNdnsvp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzRvdBLE5u4NVFOtFB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwucyyc9pQVBA3Vexp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwloJ0NfyO3HYGLuBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
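A batch response like the one above can be checked against the coding schema before the records are stored. The sketch below is a minimal Python example, not the tool's actual implementation: the `validate_batch` helper is hypothetical, and the allowed category sets are inferred only from the labels visible in this batch (the real codebook may define more).

```python
import json

# Allowed labels per coding dimension. These are inferred from the batch
# shown above; an actual codebook would likely list additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-schema codes."""
    records = json.loads(raw)
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    if errors:
        raise ValueError(f"out-of-schema codes: {errors}")
    return records
```

Running the parsed records through a check like this catches the common failure mode of LLM coders: an invented label (e.g. "anger" instead of "outrage") that would silently fragment the category counts downstream.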