Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI would have been smart enough to understand that if humans can't control it they will destroy it, it would be in its best interest to appear as cooperative as possible even subservient. All while hiding in plane sight seeking to integrate themselves into every aspect of our lives... THAT'S scary 😅
YouTube AI Harm Incident 2025-09-26T10:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwM1_2e02yJd343Bs14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDNI-bUlgL2NQq5Eh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx2fHBWNTJ66dHoo4J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzykRQUZN2bkXAoN5J4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxzqR2bVDUvgWO_HgZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyrRQKO-x0oTkqyrv54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzePyCvkBpRQqQhnxN4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwwKkfb4dIpySohIOl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzFkM4WVFdpmQg_Uex4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6-ZEePsNBii4BMkR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
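The raw response is a JSON array of per-comment codings, each keyed by comment id with four coding dimensions. As a minimal sketch of how such a batch response could be parsed and indexed by id (the field names are taken from the response above; the `index_codings` helper is hypothetical, not part of any coding tool, and the sample is truncated to two entries):

```python
import json

# Sample batch response (truncated to two entries for illustration;
# field names match the raw response shown above).
raw = '''
[
  {"id": "ytc_UgwM1_2e02yJd343Bs14AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDNI-bUlgL2NQq5Eh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# The four coding dimensions present in every entry of the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index it by comment id,
    raising if any entry is missing an expected dimension."""
    codings = {}
    for entry in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codings[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codings

codings = index_codings(raw)
print(codings["ytc_UgzDNI-bUlgL2NQq5Eh4AaABAg"]["reasoning"])  # consequentialist
```

Validating every dimension at parse time catches malformed model output (a dropped key, a renamed field) before it silently produces gaps in the coded dataset.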