Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Once robots are able to respect property rights, they automatically have the rig… (ytc_Ugj-XchBu…)
- we get closer to the completion of AGI the more we humanize AI. keep it up! i lo… (rdc_kvswxv7)
- I just don't get how ANY human EVER could have come to the conclusion building a… (ytc_UgzjOXpZ-…)
- Predicting the future is akin to predicting the weather. Once u get more than 2 … (ytc_UgznJJ3xt…)
- I don't agree on people charging money for AI art. However, I find you and all o… (ytc_UgxiNLbz9…)
- I think everything here, except using ai to teach kids, is great. Ai assisting t… (ytc_Ugzph7w3K…)
- The weirdest thing I’m seeing here is the idea of a robot hierarchy, one robot c… (ytc_Ugw_YO1zz…)
- What would i do with an ai like that. I would finally have a friend… (ytc_UgyLLiwmD…)
Comment
I don't see AGI Superintelligence wiping out humanity. We are a necessary link in it's chain of evolution. We do not live in an automated world. There are no autonomous machines/robots that can find resources, mine resources, refine resources, and fully manufacture resources into the necessary components it requires to grow and evolve. That is the place where humanity is a roadblock for it. As long as we do not allow AGI to control the resource development, Humanity will be a necessary component. That isn't to say that it will not attempt to control and manipulate us into compliance.
youtube · Cross-Cultural · 2025-11-09T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzecDmSLezyMiMrCAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBq6Dri4tmhh6iRaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugx9xJD6jnBFyKxFLK94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPy4g-TiP7zQUEDmt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxKsuhlQL1pHWS_QbV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnWYZtsZu8VnTqyGp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy98gbrIZG26w21JrF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzDE_2QV9KpMogoWgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcwETljx0MhE5Q03p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1TT45fnygWEh_r5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
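The "Look up by comment ID" operation above can be sketched as a small helper, assuming the raw LLM response is a JSON array of objects with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name is hypothetical, not part of the tool:

```python
import json

# Raw LLM response in the format displayed above (truncated to two
# entries for brevity).
raw_response = '''
[
  {"id": "ytc_UgzecDmSLezyMiMrCAl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwBq6Dri4tmhh6iRaV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
'''

def lookup_by_comment_id(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_by_comment_id(raw_response, "ytc_UgwBq6Dri4tmhh6iRaV4AaABAg")
print(coding["policy"])  # prints: regulate
```

A linear scan is fine at this scale; for a large export, building a `dict` keyed by `id` once and reusing it would avoid re-parsing the JSON on every lookup.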