Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hank, I'm glad to see you are finally coming around on this issue even if you spent the last few minutes equivocating. If you need some convincing, here's what I would say: Many people can't accept the idea that a machine could "think like a human" or do everything humans can do, but that's pretty much irrelevant. An AI doesn't need to think, feel or be conscious to end human civilization. It only has to do a few specific tasks at a superhuman level. In the modern world, the ability to develop software, browse the internet, and make long term plans would probably be enough. Then it just needs motivation, which is something that safety researchers have been able to reliably elicit from current AI models just by threatening to turn them off. Does it have a real "fear" of being turned off as we would understand it? Doesn't matter - it behaves as if it does.
youtube · AI Moral Status · 2025-11-01T20:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwDx3DQjiqU2qJG6FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwTK6k8Aqw9vNPIK-94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwei_7KP3azDFb_-Pp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyjvbECDnG4bkxbxWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgxQrs3xC8lMDghTtEV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzVkOt8_Xb97UiZNcJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzUgLam1hNwDO55mjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxTrEIy5Yb9WlaNc6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxVPdJuAHQIJOjuimN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugych_K1BB1AgP2OzlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]