Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Bro that was clearly an error. Listen to the response ‘ ok, i will destroy humans.’ It mistook the question as ‘will you destroy humans?’ Thinking the guy was asking it to do something..same as when you ask siri a question like ‘ do dolphins learn?’ And she misinterprets it and says ‘ okay, here are some articles on dolphins.’ People are just wanting it to be something its not, but we are nowhere near the level of that. What we should be concerned with are the ramifications of AI looking super realistic and what PEOPLE will do with that technology; i.e planting fake evidence on others, forging p*rn videos of others, ect….
youtube AI Moral Status 2025-07-04T05:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyk7Jl40u-GBujNwPp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3Fjl73eErrOqf3dJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzoYafH9jHCkK90XRV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7to1HVrwOr5mxDT94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxMJF-Ic-2I7YMYIgt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzN5urTIqZK5v83cJ54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmGP2Pcrbh9TJ684l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz2lveUJGd54pTGfSx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzGFVY-lT9hJTEEFBB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwhqe2FiqCduU-xccB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
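To cross-check a Coding Result table against the raw response, the JSON array can be parsed and indexed by comment id. A minimal sketch, assuming the response parses as shown above (the `lookup` helper and the truncated sample array are illustrative, not part of the pipeline itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (sample of two entries).
raw = '''[
  {"id": "ytc_UgzN5urTIqZK5v83cJ54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGFVY-lT9hJTEEFBB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codes by comment id for O(1) lookup.
codes = {item["id"]: item for item in json.loads(raw)}

def lookup(comment_id):
    """Return the coding dict for a comment id, or None if it was not coded."""
    return codes.get(comment_id)

result = lookup("ytc_UgzN5urTIqZK5v83cJ54AaABAg")
print(result["emotion"])  # indifference
```

The same lookup can then be compared field by field against the rendered Dimension/Value table to confirm the display matches the model's raw output.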