Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugz0tzBEY…: It isn't important whether ai or human art is "better". If you can't distinguish…
- ytc_UgwXkovWO…: Argh! How can someone so articulate be a climate nut?! Plus, the way she says “w…
- ytr_UgivW83sx…: how did we notice "hey, wait a minute. I'm just here in this world. huh. neat" …
- ytc_UgwxGcD3D…: Love how AI can help with your drawings as tools but you got the "genus" club do…
- ytc_UgxGEuKBp…: I done this exact chat with the crazy woke ai chat thing on Snapchat that is jus…
- ytc_Ugwt3BiPl…: Come and test and develop autonomous cars here in the Netherlands! We have plent…
- rdc_et7s8gd: Reminder Kwame N’Krumah of Ghana wanted to do this back in the 1970s with suppor…
- ytc_UgwekCze4…: We are at a point if not already living in an eagle eye movie situation where an…
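The same lookup can be reproduced outside the UI. Here is a minimal sketch, assuming the coded results are exported as a JSON Lines file with one record per comment in the shape of the raw response below; the filename `coded_comments.jsonl` and the export format are assumptions, not a documented interface of this tool.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Scan a JSON Lines export and return the record whose "id" matches."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch one of the IDs that appears in the raw response below.
print(lookup_comment("ytc_UgwvVDR2aHYjrpEDWEx4AaABAg"))
```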
Comment

> It is interesting to think about how AI would rule the world. What would be it's driving force? Humans want money, power, love etc. because of their emotions and physical wellbeing. What would an AI want? Would it base its values on the witten work of people? Would it be based on some promt that its creators gave it like 'improve your own intelligence'? Would it even care for existing? AI destroying humans might make sense because humans could prove an existential threat to it. But would it also destroy nature? would it try to expand into space? It's so interesting to think about, but if it were to happen I hope they would stop at humanity so the world (nature and animals) would get a chance to heal again. We are making a mess of the world. For it's sake I don't think it would be a bad thing if humanity was to perish or be reduced significantly.

youtube · AI Governance · 2025-08-15T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwn1OeUBybtoMhHjT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxVeKfaw9q95QUes3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwvVDR2aHYjrpEDWEx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrHx5_Syff58yOD2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTX_yAZh9yLZfckk54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHDbe6x_1smtlYjbZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyopZhnovNUhKabnSV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdvbBzrmM5jrv4MCd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQv7YwhAMUQO2edCl4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6OScoX5YaI0DWeSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
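The raw response is the model's verbatim output for a batch of ten comments; the Coding Result table above renders one such record dimension by dimension. Below is a minimal parsing and validation sketch. The field names come from the JSON itself, but the allowed value sets are inferred only from codes visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed codes per dimension, inferred from values visible on this page;
# the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse the model's JSON array and index valid records by comment ID."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record["id"]
        for dimension, allowed in CODEBOOK.items():
            value = record.get(dimension)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dimension!r} value {value!r}")
        coded[comment_id] = record
    return coded
```

Validating against a closed set of codes catches the most common failure mode of structured LLM output: a syntactically valid response that contains an off-codebook label.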