Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How can robots exceed human intelligence when their only source of information and life experience comes from humans? Since intelligence is (typically) identified as the ability to problem solve, and robots are given the means to share information, allowing them to amass and store a database of collective intelligence and experiences from each and every human being on the planet, then obviously each individual robot would certainly have the advantage of being able to access the database of that collective information and draw problem solving resources from said database that individual humans would not be able to do, however, all the information and experience accumulated in the robots' database still came from the minds of humans. That being the case, although each individual robots would have the ability to access the cumulative information to solve problems, which is something individual humans are unable to accomplish, the robots still obtained that collective knowledge from human beings so how can they exceed human intelligence?
Source: youtube, "AI Moral Status", 2023-07-15T02:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxm7_EpgIPT-FXwKz54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzJiiVCW2WCDe6IMPx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx7VVYXkzPuoTXG9Vl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxqihjhP2ShQW53V0B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyt-yt9UhHJrxG8k2t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwEiT78DEvyZVFVxcd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwW1FZB6L0NhWTCmUB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxrUUQIv31iIzI9iqV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxnvlGbrRJxPfKdB6p4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyEvE1cRqK13iK7PV54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
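The raw response is a JSON array with one object per comment, keyed by the four coding dimensions. A minimal sketch of how such a batch could be parsed and tallied per dimension (the tallying step is illustrative, not necessarily how the summary table above was produced):

```python
import json
from collections import Counter

# The raw LLM response from above: a JSON array of per-comment codes.
raw = """[
  {"id":"ytc_Ugxm7_EpgIPT-FXwKz54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJiiVCW2WCDe6IMPx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7VVYXkzPuoTXG9Vl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqihjhP2ShQW53V0B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyt-yt9UhHJrxG8k2t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEiT78DEvyZVFVxcd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwW1FZB6L0NhWTCmUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxrUUQIv31iIzI9iqV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxnvlGbrRJxPfKdB6p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyEvE1cRqK13iK7PV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

codes = json.loads(raw)

# Count how often each label appears on each dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

For this batch, `tallies["responsibility"]` comes out as none: 4, ai_itself: 3, developer: 2, unclear: 1, with no outright majority, which is consistent with a summary label of "unclear" for that dimension.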