Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A robot may not injure a human or, through inaction, allow a human to come to harm. A robot must obey orders given to it by a human, except when such orders would conflict with the First Law. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. Should have payed attention, AI is way too far along with no real safety implemented.
youtube AI Harm Incident 2025-07-23T21:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxRZEd2vSbZHqDLz2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZu4CZr84MUZLCG5V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy_jHCUbAYBOEzzASp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7zl-aUAs_FfPyf1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2j41VU2PcxMXA3il4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxfMUxc0xd00HoltX14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzr-rECvpPZNHCtQod4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgypWaUkG2CxVC7dib54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyOWvLo54pXmK2bWL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7Ker4IpncoiFIc7F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
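A raw response like the one above can be validated before its rows are stored. Below is a minimal Python sketch, assuming the label sets visible in this dump are the complete codebook (the real schema may define additional categories, and the `parse_coding` helper name is hypothetical):

```python
import json

# Allowed values per coding dimension — assumed from the labels that appear
# in this dump; the real codebook may include more categories.
SCHEMA = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "unclear", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {value!r}")
    return rows

raw = ('[{"id":"ytc_UgxRZEd2vSbZHqDLz2l4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
coded = parse_coding(raw)
print(coded[0]["emotion"])  # approval
```

Rejecting unknown labels at parse time keeps malformed or hallucinated categories out of the coded dataset instead of surfacing later as aggregation errors.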