Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Makes sense. The goal is to have perfect result to pass the test. The system would try ways to pass the test and if it means cheating or disguise failure. Once AI learns to lie and hide the bad truth, then there is no telling what else they do. Lying is a sign of intelligence and free will.
YouTube · AI Harm Incident · 2025-09-26T15:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwM1_2e02yJd343Bs14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDNI-bUlgL2NQq5Eh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx2fHBWNTJ66dHoo4J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzykRQUZN2bkXAoN5J4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxzqR2bVDUvgWO_HgZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyrRQKO-x0oTkqyrv54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzePyCvkBpRQqQhnxN4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwwKkfb4dIpySohIOl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzFkM4WVFdpmQg_Uex4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6-ZEePsNBii4BMkR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
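A raw response like the one above has to be parsed and validated before the per-comment coding table can be rendered. The sketch below is a minimal, assumed version of that step: the function name `parse_llm_response` and the `CODEBOOK` of allowed values are hypothetical, with categories inferred only from the labels visible in this dump (the real codebook may define more).

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible in this dump; the actual codebook may include further categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM JSON array into {comment_id: coded dimensions},
    rejecting any value that is not in the codebook."""
    coded = {}
    for rec in json.loads(raw):
        dims = {}
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim, "unclear")  # treat a missing dimension as "unclear"
            if value not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {value!r}")
            dims[dim] = value
        coded[rec["id"]] = dims
    return coded

# One record from the response above, as it would arrive from the model.
raw = ('[{"id":"ytc_UgwM1_2e02yJd343Bs14AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
print(parse_llm_response(raw)["ytc_UgwM1_2e02yJd343Bs14AaABAg"]["emotion"])  # fear
```

Validating against a fixed codebook at parse time means a malformed or hallucinated label fails loudly here rather than silently appearing in the coding-result table.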