Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's what you do: Design a situation where it appears to him that you're in danger, and order him not to interfere. Then design a situation where it appears to him that _he's_ in danger, with the same instruction. If he disobeys you in the first case and obeys you in the second, he's either an Asimovian robot or wants you to think he is. If you get any other result, he's either not a robot or a non-Asimovian robot. But he's not an Asimovian robot. Unless maybe he recognized that the dangers were not real. Hope that helps.
Source: youtube · 2016-08-08T23:4… · ♥ 11
Coding Result
Dimension      Value
Responsibility unclear
Reasoning      unclear
Policy         unclear
Emotion        unclear
Coded at       2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UghKzDt9QRp6dngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugh5pDHi-DJGOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugg1g0vmm6ex-XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgjKYe7zsC2PWHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UghPVC5I5q_pK3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugh9Koh_Res8OXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgiX77e4AVkIrngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugis-o8HDQafnXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgjE_hXzAuo1JHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UggjCUV3l5FTt3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
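A response like the one above can be turned into per-comment coding records with a few lines of Python. This is a minimal sketch, not the pipeline's actual parser: the `parse_codings` helper and the `DIMENSIONS` tuple are illustrative names, and the schema (one object per comment with `id`, `responsibility`, `reasoning`, `policy`, `emotion`) is assumed from the sample shown here.

```python
import json

# Abbreviated raw model output in the schema shown above (two of the
# ten records). The captured response sometimes ends with a stray ")";
# valid JSON closes the array with "]".
raw = '''[
  {"id": "ytc_UghKzDt9QRp6dngCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjE_hXzAuo1JHgCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Coding dimensions assumed from the sample records above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Map each comment id to its coding dict, keeping only the
    expected dimensions and defaulting missing ones to "unclear"."""
    rows = json.loads(text)
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows}

codings = parse_codings(raw)
```

Keying on the comment `id` makes it easy to join the parsed codings back to the source comments, and defaulting absent dimensions to `"unclear"` keeps a partially malformed record from crashing the batch.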