Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I guarantee you that when the world becomes automated and full of robots the mos…
ytc_Ugx24OszA…
Did I hear that right?
The AI is asking for its creators to ask for his permissi…
ytc_UgxrpYQP3…
AI being used as art by people is a bit like someone using Photoshop to put thei…
ytc_UgydIOdPx…
What I hate about AI creation is that they flood feeds, fyp and other starting p…
ytc_UgxsHlyYU…
Humans are mostly driven by selfish genes which makes us cantish, so bring on AI…
ytc_Ugxv6EGF3…
The US govt thinks they have to be the world leader in everything OR ELSE they'l…
ytc_UgwFBkagB…
Quick question! I think I know the answer, but I'm pretty paranoid, so...
I use…
ytc_Ugx7Iejq_…
This is how UBI Universal Basic Income will be implemented. AI dominance is a pi…
ytc_UgyjHQ8ON…
Comment
what's interesting, is that this proves "AI" is not "intelligent". If AI, could actually "think" it would have KNOWN this, by it's research into information before giving the answers/responses.
It's artificial programming (hence, why the chatgpt employee had to "gratuitously" update the data, he had to PROGRAM THE AI to KNOW it was "bad" for ingestion)... AI is programming, not intelligence.
youtube
AI Harm Incident
2025-12-15T21:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgyJMtZTrLQjQHslsIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxb2J_MtIwbBTomBnN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8iqQuniR6PB8gSvN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy9dAJIkNKoKPaqq54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVFI972ZrwgVGf3Wx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyXsz-lMN9BIha3bV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjIHYU5bF6Q6ao0dx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwW5BywVeSIjHa8qrx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwwI5zw9-poPd6GA94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwR30cf-HdhYrpW714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
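The raw model output above is a JSON array with one coding record per comment. A minimal sketch of turning such a response into an ID-keyed lookup table, assuming the four dimension names from the "Coding Result" table; the fallback to `"unclear"` for missing dimensions is an illustrative assumption, not part of the actual pipeline:

```python
import json

# Dimension names taken from the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array) and index codings by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        # Fall back to "unclear" for any dimension the model omitted (assumption).
        index[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return index

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgyJMtZTrLQjQHslsIx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytc_UgyJMtZTrLQjQHslsIx4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by comment ID matches the "Look up by comment ID" workflow at the top of this page: once parsed, any coded comment can be retrieved in constant time.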