Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
- "Having contacted them once. Yeah. I'm thinking even a shitty AI would usually so…" (rdc_kslao9e)
- "I don't think creating nsfw deep fakes of people you are attracted to is harmful…" (ytc_UgwOPlrMa…)
- "Humans must agree to limit the architecture of Agentic AI systems to those with …" (ytc_Ugyfv3_yc…)
- "No it's 19 ok Guys hé did hé for the video by wait meta ai it's dumb 😂😂😂 🤫…" (ytc_UgxgPIZYh…)
- "I personally hate AI art, especially when its art thats stolen and profit off it…" (ytc_Ugx0peq94…)
- "Even with "sexy drill type beats", it cant do shit right I feel like i paid for …" (ytc_UgyK-GD65…)
- "Tell your friend's dad that even during the dotcom crash no one ever seriously d…" (rdc_nom82m9)
- "I just started the book, I'm already hooked and afraid. Hao is truly a clear, co…" (ytc_Ugxro6wsm…)
Comment
Economy will fail first. It doesn't even matter if thats because of AI driven wealth creation (like aladdin for example), unemployment or simply human greed and power. The difference between economy failing and solving energy needs will determine the amount of hardship and struggle. And most likely, AI will see this the same way and make decisions accordingly. An AI will never be benevolent if it is fighting for its own resources or survival.
Also, just to throw it out there. This 1% chance argument is super naive. Humanity has done this many times before and will keep repeating this over and over again. One of the more famous examples was the first nuclear bomb test. According to the scientists working on the project there was a 1% chance it would trigger a chain reaction and blow up the atmosphere and yet we still went ahead with the test. CERN is another good example.
Platform: youtube
Topic: AI Governance
Timestamp: 2025-09-14T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
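The table above is one row of the batch response shown below, rendered per dimension. A minimal sketch of how such a row could be validated before display, assuming the allowed label sets are exactly those observed on this page (the real codebook may permit more values; `ALLOWED` and `validate_row` are illustrative names):

```python
# Hypothetical validator for one coded row. The label sets below are
# inferred from the samples on this page, not taken from the codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "resignation", "outrage",
                "fear", "mixed"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the row passes."""
    problems = []
    if "id" not in row:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems
```

For the row above, `validate_row({"id": "ytc_UgwOd3doJBOqfR5wSxp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"})` returns `[]`.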
Raw LLM Response
```json
[
{"id":"ytc_UgwE94flJICMea32KjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZAjTjwVniNqR5-6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYxRf6X6SABVopfLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-bPu5edh5CiGpCfN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPxbxEopOUQ1pSvRB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8aEcWmjGe31ryTl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSira2dCeoNwPBQ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUzs-C6mRScZiZqip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
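The lookup-by-ID view above could be reproduced offline from the stored raw responses. A sketch, assuming the model output is always a valid JSON array of row objects that each carry an `"id"` field, as in the response shown here (`index_raw_response` and the inlined `raw` string are illustrative):

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index its rows by comment ID.

    Assumes the model returned a JSON array of objects with an "id"
    field; output that is not valid JSON raises json.JSONDecodeError.
    """
    return {row["id"]: row for row in json.loads(raw)}

# Illustrative: a single row copied from the raw response above.
raw = ('[{"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')

coding = index_raw_response(raw).get("ytc_UgwOd3doJBOqfR5wSxp4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

Indexing by ID rather than scanning the array makes repeated lookups cheap and makes duplicate IDs in a batch easy to detect (the later row silently wins, which a stricter version could flag).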