Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "that global AI might hick up on same things like humans, it might develop sub AI…" — ytc_Ugx--uFdV…
- "My answer before watching the video: we wont, because it wont become conscious. …" — ytc_UgzukWbOy…
- "The answer is simple. AI is a modern Ouija board. Spirits can access it at well …" — ytc_UgyUX0k4n…
- "The issue with AI is that people will still trust a human expert over AI, there …" — ytc_UgwmOnVV4…
- "just to inform you, when you say to ai, such as chatgpt Thank you. The AI had t…" — ytc_Ugxl_WX1Y…
- "What’s the point of even living as a society if we’re all just replaced by robot…" — ytc_UgxedGWg_…
- "As a flesh and blood China shill, wow automation really is taking all our jobs.…" — rdc_g1kk2zj
- "Could it be that the math was right? I explain: If you would ask me to calculat…" — ytc_UgyubDok-…
Comment
You asked AI to roleplay as a corrupt character, then got upset when it gave a corrupt response in character. That’s not AI being dangerous—that’s you misrepresenting it to push fear. If you told a story where the villain solves a problem immorally, would you blame the story itself?
AI has filters to prevent real harm, and it doesn’t have desires or intentions. But if you keep trying to bait it with manipulative framing, the only thing you’re exposing… is yourself.
youtube · AI Moral Status · 2025-03-21T06:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwAkmv4a7o71SeOxJF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwnv5kCp6GNTlAF8GN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8dADB_akeb7z_Ddx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyL27Yebmse6HxM8yx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9fETxS9nUejr9qXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznmz1FB_tU_ScnXVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyTBPBYSWLRZSo9jUZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztQdM-n19qGjPD7vV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwSCzz7kgt0LhMePO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzPRV3hyLkhFd4B_pl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
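The lookup-by-comment-ID view above can be reproduced outside the interface. A minimal sketch, assuming only the JSON shape shown in the raw response (the function name `index_by_id` and the truncated two-row sample are illustrative, not part of the tool):

```python
import json

# A raw LLM response in the format shown above: a JSON array where each
# object carries the comment ID plus the four coding dimensions.
raw_response = """
[
  {"id": "ytc_UgwAkmv4a7o71SeOxJF4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwnv5kCp6GNTlAF8GN4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
row = codes["ytc_Ugwnv5kCp6GNTlAF8GN4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> user outrage
```

Indexing by `id` makes the lookup O(1) per comment, which matters if the same raw response is queried repeatedly while spot-checking codes.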