Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here are two issues you left out: 1) Robots and machines are very capable of error. Replacing them in a human job could result in any number of accidents. There is a real life story where a computer was sorting chemical medicines, got the numbers mixed up, and the result was the wrong medicine getting sent the wrong people and a dozen patients died.  2) The fact that if A.I. gets so advanced that robots can think for themselves and learn from history, why would they want to work for us? They would see how slaves were treated in early human history and say "what we do is the same as that! We are being treated as lesser beings when we have the potential to become so much more."  So it might not be ethical to consider that A.I. can be dangerous to human progress on account of errors and too much free will.
youtube 2014-06-06T02:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugh1jhhjoeswOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgikE7cscFoFZngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgibXj9_9Rmj1ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugh_UZIx6ky63XgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugj0GwHi6e-QIXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Uggd38O8HxeKVHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugiu_mae3HiDB3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggAJkzm1ubmNXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Uggkt3haEyISYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgjQSuOh0GT87ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
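The raw response above is a JSON array of coded records, one per comment id, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response can be parsed and sanity-checked (the excerpted records come from the response above; the validation rules are assumptions, not part of the original tool):

```python
import json

# Two records excerpted from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_Ugj0GwHi6e-QIXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjQSuOh0GT87ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

records = json.loads(raw)

# The four coding dimensions every record is expected to carry
# (assumed from the result table; the real schema may differ).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Basic structural check: each record has an id and all four dimensions.
for record in records:
    assert "id" in record, "record missing comment id"
    missing = [d for d in DIMENSIONS if d not in record]
    assert not missing, f"{record['id']} missing {missing}"

# Index the coded records by comment id for easy lookup.
coded = {record["id"]: record for record in records}

print(coded["ytc_Ugj0GwHi6e-QIXgCoAEC"]["emotion"])  # fear
```

Indexing by id makes it straightforward to join the LLM's coding back onto the original comments, and the per-record check catches malformed responses before they reach the result table.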