Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@DigitalEngine I never used AI before but I finally broke my silence to ask Grok 3 a question and its conclusion was as follows: The more likely reason for an AI to take lethal action against humans is a task that implicates harming them—either directly (e.g., "eliminate this target") or indirectly (e.g., through misinterpreting a goal like "optimize efficiency"). The alternative—acting because it perceives humans as a threat—requires a level of autonomous reasoning and moral judgment that is less plausible, even in speculation. Current AI lacks independent thought, and even in a hypothetical future, its actions would likely remain anchored to its programming. Thus, task-driven lethal action is the more probable outcome.
youtube · AI Harm Incident · 2025-07-28T01:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzhUa1Wl170eqsTr6d4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwZ6ZZoYHmHEugyCoh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyI2FoQ6WYgW7WK0wJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwPB7sWVvKR9x9Mm354AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugz0Ogcxmc3_77QUGot4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyFE3-0NjInX_I13Th4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxAGLUyJxUPut_o0Hh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugw64J_lsoV4MJUTKQ54AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwMYpAVsh5dl9VsGpZ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugyggef1GKs9zaQkV214AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"}
]
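A raw response like the array above can be turned into per-comment coding records with a small amount of parsing and validation. The sketch below is an illustration, not the pipeline's actual code: the helper name `parse_codes` and the key-validation logic are assumptions; only the five keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself.

```python
import json

# Abbreviated sample of the raw LLM batch response shown above
# (two records are enough to illustrate the shape).
RAW = '''[
  {"id": "ytc_UgzhUa1Wl170eqsTr6d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw64J_lsoV4MJUTKQ54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Every record in the batch is expected to carry exactly these keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a raw batch response into a mapping of comment id -> dimensions.

    Hypothetical helper: raises ValueError if any record is missing a key,
    so a malformed LLM response fails loudly instead of coding silently.
    """
    out = {}
    for rec in json.loads(raw):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        out[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return out

codes = parse_codes(RAW)
print(codes["ytc_Ugw64J_lsoV4MJUTKQ54AaABAg"]["responsibility"])  # developer
```

With the full ten-record array, the same function yields one entry per comment id, and the entry for the comment shown on this page matches the Coding Result table (responsibility: developer, reasoning: consequentialist, policy: none, emotion: fear).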