Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- April 2025 half marathon in Beijing a 21 humanoid robots participated. The winni… (ytc_UgyGqT_g-…)
- A couple of years ago I was watching programs about AI, where it was stated that… (ytc_Ugw8VAijd…)
- This just proves that the people in charge have not got a clue about coding. I a… (ytc_Ugy8EDBKf…)
- Bro only white voice and i swear,im nit beeing a racist but try to make a jamaic… (ytc_Ugz_sQtOq…)
- Nice CGI but the argument is wrong. This can't happen because if people can't af… (ytc_UgzcegdDY…)
- I believe (might be wrong) people are not blinded by the money (except few of co… (ytr_Ugz9QYU3j…)
- A.I. is not the problem, it is and will be for a long time just a tool. How we u… (rdc_ktb07gq)
- Its about the work it needs to make something. Ai Art is not original and doesnt… (ytr_Ugy8Qqrnb…)
Comment

> If we say AGI is more intelligent with respect to humans. It seems an overstatement as humans seems have intelligent to see life at deeper level like - ageing, change, existancial which does not only make human to take control over other creatures but to be compassionate towards them as humans are not different from them. For AI to be more intelligent than humans it has to see the life more deeper level than humans. If its just trying to sustain itself by killing humans it doesn’t mean its intelligent. Humans can die for being truthful by seeing the totality of life than to be self centric by harming others.

youtube · Cross-Cultural · 2026-04-16T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw9QejjI9GYwZJ2M5d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJaldXMbe1aicGFHN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwYRsjuzSrIvxa1UNd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzuv6UH8fVHV0rx7id4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwjj90M50r8cAyjhsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxcfMmxdKle_KK7KF94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbNXMYn89dWfwoSVJ4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwMf95yj4jHcttd0G14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2nOTWZCLJ7Njc4wd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyQefmizPdDC819zNx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
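The raw response is a JSON array in which each element carries the comment ID plus the four coding dimensions, so "look up by comment ID" amounts to indexing the array by the `id` field. A minimal sketch of that lookup, assuming the response parses as valid JSON (the helper name `index_codes` and the two abbreviated entries are illustrative; the field names match the output shown above):

```python
import json

# Abbreviated raw LLM batch response: a JSON array of per-comment codes.
# Field names (id, responsibility, reasoning, policy, emotion) match the
# real output; the two-entry payload here is just for illustration.
raw_response = """
[
  {"id": "ytc_Ugw9QejjI9GYwZJ2M5d4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxbNXMYn89dWfwoSVJ4AaABAg", "responsibility": "unclear",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Looking up one coded comment by its ID:
print(codes["ytc_UgxbNXMYn89dWfwoSVJ4AaABAg"]["reasoning"])  # virtue
```

Keying by `id` rather than list position keeps the lookup stable even if the model returns the batch in a different order than the comments were sent.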