Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think conscious humanlike AIs are really the problem. The hard question comes when we make AI that is just as smart or way smarter than we are but doesn't have a consciousness. Giving them sentience in the first place apart from academic curiosity seems like a silly idea in any case.
YouTube · AI Moral Status · 2017-02-23T14:2…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgjrmoGYAAEue3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi1Pt8MqFieYngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugg_s6KlAogU3XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi9VBjs69mr6ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghgyO9SEyAWYngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjWIUkzgDDkGXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugho31qILwfle3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiVEECnLWqvWHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjLsMyw0B8wCHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UggpPbJsPQwWkngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
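The raw response is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and matched back to a specific comment (the variable names are illustrative, not part of the tool; the array is truncated to two records from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (truncated to two records; ids taken from the response above).
raw_response = """[
  {"id":"ytc_UgjrmoGYAAEue3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi1Pt8MqFieYngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding result for the comment shown above.
rec = codes["ytc_Ugi1Pt8MqFieYngCoAEC"]
print(rec["responsibility"], rec["emotion"])  # developer fear
```

Indexing by id makes the lookup robust to the order in which the model returns records, which is why each coded comment carries its id through the batch.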