Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We can use AI even if it hallucinates if the chances of it getting that thing wrong is less than a human. Like an AI tool getting 1 wrong medicines every 10,000 whereas if the humans get 5 wrong out of 10,000. AI is better for that.😊
youtube 2023-09-21T21:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugxtjtd8Ogk0IQDJQZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz5ut0j8DFFy9nPOwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxdYP2xlVux9flx7jF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy7vgR-p8MYmBBUWwl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw4_yWGYU0hAbfLHNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxYKN3taMDM76tUsrB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwv5cPZNFb-viV_P8t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxZp4xIjZYWgaCLAT14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwMCthk8s16qkR6v694AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"mixed"}, {"id":"ytc_UgzL8QOryhPAOr5cHt14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]