Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This question is for everyone reading this, please comment back. What do you thi…
ytc_UgyPRLpY5…
Thank you for this conversation! I learned so much from it ❤. Very important les…
ytc_UgxQwHtQM…
main problem with your thesis is, ai is far from cheap, they keep sinking in mor…
ytc_Ugw-LNIdU…
Don’t really get how people are comparing how we learn to how an ai is trained. …
ytc_Ugx19s2Mm…
If AI ever decides to do something bad it will be because it is trained on us hu…
ytc_UgxpUOhRQ…
There is value in this interview in that this engineer admits that he is an Uber…
ytr_UgyomEtT0…
If being agenst AI art is ableist, then I don't have ADHD. Im bad at 2D art, but…
ytc_UgwkozT4P…
Almost all your fears are legitimate.
But not #5: "Humans suffer without a purp…
rdc_j4x2x8e
Comment
How can robots exceed human intelligence when their only source of information and life experience comes from humans? Since intelligence is (typically) identified as the ability to problem solve, and robots are given the means to share information, allowing them to amass and store a database of collective intelligence and experiences from each and every human being on the planet, then obviously each individual robot would certainly have the advantage of being able to access the database of that collective information and draw problem solving resources from said database that individual humans would not be able to do, however, all the information and experience accumulated in the robots' database still came from the minds of humans. That being the case, although each individual robots would have the ability to access the cumulative information to solve problems, which is something individual humans are unable to accomplish, the robots still obtained that collective knowledge from human beings so how can they exceed human intelligence?
youtube
AI Moral Status
2023-07-15T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxm7_EpgIPT-FXwKz54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJiiVCW2WCDe6IMPx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7VVYXkzPuoTXG9Vl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqihjhP2ShQW53V0B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyt-yt9UhHJrxG8k2t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEiT78DEvyZVFVxcd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwW1FZB6L0NhWTCmUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxrUUQIv31iIzI9iqV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxnvlGbrRJxPfKdB6p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyEvE1cRqK13iK7PV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
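A raw response like the one above is a JSON array with one record per coded comment, keyed by comment ID across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response, assuming Python; the `VOCAB` value sets are inferred from the examples on this page, and `parse_codes` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per coding dimension (assumed from the visible
# examples; the real codebook may differ).
VOCAB = {
    "responsibility": {"developer", "ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Any missing dimension, or any value outside the expected
    vocabulary, is coerced to "unclear" so downstream tallies
    never see an unknown label.
    """
    coded = {}
    for record in json.loads(raw):
        coded[record["id"]] = {
            dim: (record.get(dim) if record.get(dim) in allowed else "unclear")
            for dim, allowed in VOCAB.items()
        }
    return coded

# Example with one well-formed record (hypothetical ID).
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
codes = parse_codes(raw)
```

Coercing out-of-vocabulary values to "unclear" rather than raising keeps a single malformed record from discarding an entire batch of codes.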