Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The closer we get to a true AI the closer we get to creating a machine that can store our mind. Then eventually there would be no need to create AI. We could just put those willing in the machine where we would no longer be limited by our physical and chemical imperfections that cause us to make mistakes. In these machines we could explore deep space using less resources.
youtube 2014-03-05T06:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugh5V3YOV93rWngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugg5goCP14AT1XgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgiXS1llO95FSXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UghfVCn1IgUzkngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugi88SRx7ZiuM3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgiiyHwWPyzCXXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghXv2x0v1pR-3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggQN5wdCp2br3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugjk73oPCQ1qY3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjWDAxbrHyy63gCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
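A raw response like the one above can be checked and indexed by comment id before the codes are stored. The sketch below is a minimal example, assuming the response is a JSON array whose objects carry exactly the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `parse_coding_response` is hypothetical, not part of any pipeline shown here.

```python
import json

# The five fields visible in the raw response above (an assumption about the
# full schema, based only on this example).
FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text):
    """Parse a raw LLM coding response and index the codes by comment id.

    Raises ValueError if the payload is not a list of objects with
    exactly the expected fields.
    """
    rows = json.loads(text)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for row in rows:
        if not isinstance(row, dict) or set(row) != FIELDS:
            raise ValueError(f"unexpected fields in row: {row!r}")
        coded[row["id"]] = {k: row[k] for k in FIELDS - {"id"}}
    return coded

# Two rows taken verbatim from the raw response shown above.
raw = (
    '[{"id":"ytc_Ugh5V3YOV93rWngCoAEC","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_Ugg5goCP14AT1XgCoAEC","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
coded = parse_coding_response(raw)
print(coded["ytc_Ugg5goCP14AT1XgCoAEC"]["emotion"])  # -> approval
```

Indexing by id makes it easy to join a code back to its source comment, as the "Coding Result" table above does for a single comment.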