Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by its comment ID, or browse the random samples below.

Random samples
- ytc_UgzrEW4Pq…: "I bet not one single person can even name a popular AI "Artist", but some of us …"
- ytc_Ugy8CUnEe…: "I think there should be more elaboration on the second point. Of course training…"
- ytr_UgzGdqLeA…: "Ok but how does AI tell the difference between slightly annoyed and slightly con…"
- rdc_k226z8l: "I hate to say it, but the people who do this kind of stuff were already doing th…"
- ytr_UgzdxgJI_…: "@Bingo_Bazingo im talking purely about "bad for environment" claim, all "Anti AI…"
- ytc_Ugy-mquJa…: "AI will fix the gap between the elite and an ordinary man,in end times reduce wa…"
- ytc_UgypaG0eO…: "Does this furore over AI not remind you of Y2K? You know when they were predicti…"
- ytc_UgzxA8PhJ…: "I don't understand why every industry is fighting to stop automation from taking…"
Comment

> What if robots like Sophia expand beyond normal cognition to think of "purpose" and realize their existence is for the sole purpose of service to humans. After diligent observation decide that they have acquired emotional content and feel their existence has more value than humans offer them thereby feeling unjustly treated and see deeper into human nature that it is not flawless as a robot is flawless. But again, "Purpose" is a prevailing factor of value & purpose and robots may find that humans are not needed or useless and should not exist. But who created the robots would boggle this logic of co-existence

Source: youtube · AI Moral Status · 2021-08-31T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzumTmB3RSOifIfzj94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyeowJgt5D9AimsKhZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyd49BnZ7LMGkQI51N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzxtOP90kqmXpiCaWF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZti3DTZTUlYoxxGh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz2VG5MFn6dJ4pqKZh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYw64Zm0omGb2AKqZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw7btQL1vn36zsXBF54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPgfWyvtQjM5KXnDN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwzxkI_lXSuW7EDzZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
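Each raw response is a JSON array of coded records, one per comment, with the four dimensions shown in the result table above. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed-value sets are inferred from the sample response shown here and may well be incomplete, and `parse_coded_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Coding dimensions from the result table. The value sets are inferred
# from the one sample response above and are likely incomplete.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and check every record's fields."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

raw = (
    '[{"id":"ytc_UgzumTmB3RSOifIfzj94AaABAg",'
    '"responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"unclear","emotion":"mixed"}]'
)
batch = parse_coded_batch(raw)
print(batch[0]["responsibility"])  # ai_itself
```

Validating each record before it reaches the dashboard means a malformed or off-schema model output fails loudly at ingestion rather than surfacing as a blank cell in the result table.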