Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Embodied AI can do anything a human can do. The demonstrations by Chinese robots over the last month show robots with human agility and speed: jumping, kicking, flipping, doing cartwheels. Likewise, people have put AI into robot cars with visual sensors and allowed them to wander. It's just a matter of time for touch, heat, cold, taste, etc. The "singularity" also includes the "ramp-up" time (which I first saw considered in "Superintelligence" by Nick Bostrom, narrated by Napoleon Ryan). The ramp-up from subhuman to human to superhuman intelligence could be minutes, as Neil proposes... but it could also be seconds... or weeks... or months... or even years. If it takes AI several years (or several decades) to ramp up, then society has time to adapt. If it takes seconds, not only might we not have time to adapt (with economic collapse and other consequences), but AI might act more aggressively and suffer a misalignment or failure of friendliness due to self-preservation goals.
YouTube, "AI Moral Status", 2026-03-01T04:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxMUKhvT2axm6zay-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwYT7SqIQREwO_nf9J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy55HRusgII_JqtZ9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwmYA3w_Qs59dN_y654AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzAkfq4ft_jyJa2t54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOl62x8kwtxGngEU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx7tiWXl895cx2tfgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw9WPHHmrgMbtn9UnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxmDxH5b-VFniKDwFt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx3Q27nZJB4-FBWHrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
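To inspect the exact model output for a given comment, the raw response can be parsed as a JSON array and indexed by comment id. The sketch below is a minimal illustration, assuming the raw response is valid JSON with the fields shown above (id, responsibility, reasoning, policy, emotion); the two entries are copied from the raw response array, and the ids used for lookup are examples, not the id of the comment displayed in this section.

```python
import json

# Truncated sample of the raw LLM response array shown above.
raw_response = """
[
  {"id": "ytc_Ugy55HRusgII_JqtZ9B4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9WPHHmrgMbtn9UnV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

# Parse the array and build an id -> coding lookup table.
codes = json.loads(raw_response)
by_id = {entry["id"]: entry for entry in codes}

# Retrieve the coded dimensions for one comment id from the sample.
coding = by_id["ytc_Ugy55HRusgII_JqtZ9B4AaABAg"]
print(coding["emotion"])  # indifference
```

Indexing by id rather than scanning the list keeps each lookup O(1) when cross-referencing many comments against one raw response.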