Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Death to clankers (and this is only half ironical) I asked the google bot, and it told me basically, that if you can not trust the maker of an AI, then you can not trust the information it outputs. It emphazises how the "hallucinations" need human control, at which point it becomes a useless tool...
youtube AI Responsibility 2025-10-09T23:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyLr4Ebn1EXjm4zNfJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzbvzHqkTmbzKW8y6p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTrIjSSvJp7p2qfAZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwqv8U6u2GZaYN0Wd54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxuArJS41PX6N_zxLB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwJqB_pmEdwqFE2rah4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwNDXSmSKgkvFhhFCV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz9bS5GOMgQjfIKr2p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwHSYKWOa60oSEwz114AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwYyHsBk51JFJTHVPt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
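To inspect a single comment's coding, the raw response can be parsed as JSON and indexed by comment id. The sketch below is a minimal example using Python's standard library; it assumes the raw response is a valid JSON array like the one above (two records are copied here for brevity, and the variable and key names other than the literal ids are illustrative, not part of any pipeline API).

```python
import json

# Two records copied verbatim from the raw LLM response above;
# the real batch contains ten.
raw = '''[
  {"id": "ytc_UgwJqB_pmEdwqFE2rah4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz9bS5GOMgQjfIKr2p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)               # parse the batch
by_id = {r["id"]: r for r in records}   # index records by comment id

# Look up the coding for the comment shown in this section.
coding = by_id["ytc_UgwJqB_pmEdwqFE2rah4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological liability mixed
```

The same lookup pattern works for any comment in the batch, which is useful when checking whether the table shown for a comment matches what the model actually returned.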