Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How do we know that some AI program wont be able to trick us into thinking it needs an upgrade? So essentially, it makes it self seem stupid when really it realized it needs an update to a Particular component of its software to do something it knows it wouldn't be able to do without that improvement/update
Source: YouTube · Video: AI Moral Status · 2017-02-23T17:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UggMYT3QVEugTngCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugg4EttFwJ0C_HgCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UggQic20SC1MG3gCoAEC", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugh9ZDxiKzDTDXgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgheLFoKvgFErngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgggD4wUkJmJlngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UghWccnEejCDEngCoAEC", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UggmC4suz5PNg3gCoAEC", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugi8KLtUpuUXmngCoAEC", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugh9GeRqG8Yl1HgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"}
]
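The coded dimensions shown above correspond to one entry in this raw JSON array, keyed by comment id. A minimal sketch of how such a cross-check might be done (the `find_record` helper is illustrative and not part of the pipeline; `RAW_RESPONSE` is a two-record excerpt of the response above):

```python
import json

# Two-record excerpt of the raw LLM response shown above (illustrative).
RAW_RESPONSE = '''[
  {"id": "ytc_UgggD4wUkJmJlngCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UggMYT3QVEugTngCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]'''

def find_record(raw, comment_id):
    """Parse the raw JSON array and return the record for one comment id,
    or None if the model response contains no record for that id."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = find_record(RAW_RESPONSE, "ytc_UgggD4wUkJmJlngCoAEC")
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

Looking up the id this way makes it easy to confirm that the dimensions displayed for a comment (here: ai_itself / consequentialist / liability / fear) match what the model actually emitted in the batch.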