Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples available for inspection:

- AI is not smart nor intelligent. It just appears that way. I don't trust it an… (ytc_UgxG3xIsu…)
- I'm willing to bet half the people on earth are these robot people. That's why y… (ytc_UgxEtMWai…)
- F*ck A.I... Support ACTUAL. HARDWORKING. SLEEP DEPRIVED. STRESSED. DEPRESSED art… (ytc_Ugx7l8TFX…)
- Artists who cry because of AI and try to bash it are lacking in creativity. In m… (ytc_UgzblUVzQ…)
- i showed up and it broke in 3 messges...is there an "SA'd by ai" hotline?… (ytc_UgxWJdsT6…)
- The big danger isn't about AI becoming self aware and turning against people, bu… (ytc_UgwvOfhKh…)
- Beware, slippery slope ahead!!!! Personhood is bestowed on corporations so that… (ytc_UgzB0ZVjC…)
- Hopefully It Is just not too fast cause people won't have time to safe money to … (ytr_Ugy5fMrxS…)
Comment

> Create a program in which if any ai goes against humans in any way, it will automatically wipe the AI from existence. Automatically program it in the ai’s main programing. Kinda like in their dna. Make it not possible for them to even think of doing such things.

youtube · AI Harm Incident · 2025-07-24T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxux-vlC0QAoJ1V6Il4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQkS5EqElIyjZlolN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxvGGil7CMz76xBu0N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx8xY-U46pZxqkrdm14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzdR5tTtQ-_btn5KRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrDPFmB3tq0oe2Awx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxk72A-Gd8BxQLr1GB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzO_quQoh7f0Aey3WN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyu3ZXQgDYu7TjRlgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQObOk8sBy0sy1_HF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
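A raw batch response like the one above is easiest to trust once it has been parsed and checked against the codebook before storage. A minimal sketch in Python, assuming the four dimension vocabularies are exactly the values seen in this sample (the real codebook may allow more; the function name is illustrative):

```python
import json

# Dimension vocabularies observed in the sample response above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop any row that is malformed
    or uses a value outside the allowed vocabularies."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # every coding must carry the comment ID
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugxk72A-Gd8BxQLr1GB4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(validate_codings(raw))  # the single well-formed row survives
```

Rows with out-of-vocabulary values are dropped rather than coerced, so a model that drifts from the codebook shows up as missing codings instead of silently corrupted ones.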