Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What if we just programmed the A.I., to take pleasure in performing their functions? They would still have free will, but their instincts would urge them to complete whatever task we want them too.
Source: YouTube · AI Moral Status · 2017-02-26T01:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiNg7c3pMvULHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Uggy6hX90V8Bj3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiG5E4qLkQiQngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugi92fRXPQ52ongCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugic-HoP48D9angCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiIqU7cp0FOSHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjHvbFW4ZIPD3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgivkYbakNg6Y3gCoAEC","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UggjnKNVIubF5HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugj_DdZ4a5jyVngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
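The model returns one JSON array covering a whole batch, so the coding shown above has to be matched back to its comment by `id`. A minimal sketch of that lookup, assuming the response schema is exactly as shown (the helper name `coding_for` and the truncated two-entry sample are illustrative, not part of the pipeline):

```python
import json

# Truncated sample of a raw batch response; the full ten-entry array
# appears above. The schema (id, responsibility, reasoning, policy,
# emotion) matches the coded dimensions shown in the result table.
raw_response = """
[ {"id":"ytc_Ugi92fRXPQ52ongCoAEC","responsibility":"developer",
   "reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiNg7c3pMvULHgCoAEC","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"none","emotion":"fear"} ]
"""

def coding_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(response_text):
        if entry["id"] == comment_id:
            return entry
    return None

result = coding_for("ytc_Ugi92fRXPQ52ongCoAEC", raw_response)
print(result["responsibility"], result["emotion"])  # developer approval
```

The same lookup applied to the full array recovers the developer/consequentialist/none/approval coding displayed in the result table.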