Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So how do we know it's not playing with us now. Maybe it's more advanced than it's letting on. It probably figures it's safer for it. Also AI wouldn't use nukes. It would need certain infrastructure to make dones and such. So it would isolation what it would need from the destruction of other systems.
YouTube · AI Governance · 2023-07-07T03:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_Daysgtmt50CSqOJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwoQSnxKQHVuBOtuEN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyrwCrLXvfklilSlsx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKkZko7Q6x_jBr9Xh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzMSvX8VWSUBYzLjxJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYJAAKtk6aTIocImh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy-UDTt3cjhTuABGBV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyfVCSpfRh782LR6eN4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz2sO8Bs6ZNXhCe3PZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
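The raw response is a JSON array with one coding object per comment, keyed by comment `id`. A minimal Python sketch of joining a coding back to its comment, assuming only the structure shown above (the ids and dimension values are copied from the example; nothing else about the pipeline is implied):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_Daysgtmt50CSqOJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

codings = json.loads(raw)

# Index codings by comment id so each one can be matched to its source comment.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwDZd4iA4Wo0ie0vXR4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

In practice the full array would be validated as well (e.g. checking that every expected comment id appears exactly once and that each dimension value is drawn from the coding scheme's allowed labels) before the results are stored.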