Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is not AI, the problem is people behind the AI. If an AI is trained on a data set that the number 1 target is to protect humans, then this AI would never turn against humans. An AI does what is trained for...so if humanity is wiped out, it would be the fault of those humans that trained the AI...
youtube AI Governance 2023-07-08T01:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzJrxhdINhcN4vhZpl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzdoJd0XUNF-lN4QR14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzwrBKIeOKvKi22j9l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_WWGv6gRiUp0aoR54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-tTFYDCjcLawJo_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwqdGmnGlptuLfcmB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy58-M-Ht23hvMN0eJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw1pfbdODAPkGwOkix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMSKvY4T8h6R7rVh54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgybagFKcsj1YUn_a6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
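A raw response like the one above can be parsed into a lookup table keyed by comment id, so the coding for any given comment can be inspected directly. This is a minimal sketch, not part of the original tooling; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the output shown, and the short one-element `raw` string stands in for the full batch response.

```python
import json

# Stand-in for a raw LLM batch response: a JSON array with one
# coding object per comment (field names as in the output above).
raw = (
    '[{"id":"ytc_Ugw_WWGv6gRiUp0aoR54AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Inspect the coding for a specific comment.
row = codings["ytc_Ugw_WWGv6gRiUp0aoR54AaABAg"]
print(row["responsibility"], row["emotion"])  # developer indifference
```

Keying on `id` mirrors how the Coding Result above corresponds to one entry in the batch: the developer/consequentialist/unclear/indifference values match the fourth object in the array.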