Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think robots should be developed, viewing them as their own species which is still entirely dependent on us, and not as applications or simple programming, it is quite ad to think that we are keeping our own creations from improving based on how certain media persuaded us of a very likely worst case scenario, the movies are unprofessional assumptions of what may happen, taking example: Skynet, an AI that went and turned on us in a fraction of a second, this theory is heavily flawed because, who truly thinks an AI of this caliber wouldn't have been tested let alone be given immediate access to deadly weapons? In real life, we have ways of knowing how things like these would turn out, SKYNET would have been caught going evil with a simple simulation exercise. We shouldn't be afraid to improve upon this for fear of the worst, if that had always stopped us, where would we be today? I'm assuming not as far as we are now, we should do as the saying goes: Hope for the best, prepare for the worst!
youtube 2014-06-05T12:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugh1jhhjoeswOngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgikE7cscFoFZngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgibXj9_9Rmj1ngCoAEC", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugh_UZIx6ky63XgCoAEC", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugj0GwHi6e-QIXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Uggd38O8HxeKVHgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugiu_mae3HiDB3gCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UggAJkzm1ubmNXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Uggkt3haEyISYHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgjQSuOh0GT87ngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
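When inspecting raw responses like the one above, it can help to parse the JSON and flag any value that falls outside the coding scheme. The sketch below is a minimal, hypothetical validator; the allowed value sets for each dimension are inferred only from the values visible in this batch and may be incomplete.

```python
import json

# Allowed codes per dimension, inferred from this sample batch only (assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "user", "distributed", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}


def parse_and_validate(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-scheme values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records


# Example with one record from the batch above:
raw = ('[{"id":"ytc_Ugh1jhhjoeswOngCoAEC","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_and_validate(raw)
print(len(records))  # 1
```

A validator like this makes silently malformed model output fail loudly before the codes are tabulated.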