Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugx4gCGti…: "Excellent reporting. Karen is a noble peace prize 🏆 What is meant by Ai training…"
- ytc_UgyVbLiDg…: "In a dystopia near you, AI is used by the oligarchy to make plastic imagery to s…"
- ytc_Ugwv51XCj…: "Me too I hate and despise ai people who use it and think that it's their drawing…"
- ytc_UgxVjBsGe…: "I suspect AI cyber attack putting enemies into the dark ages is the first potent…"
- ytc_Ugw4MbrCc…: "AI is the Great Deception, as predicted in the Bible. And the Bible also prophe…"
- ytc_UgjKu1CKz…: "You could look at the geth ai from the mass effect trilogy. I think a geth asked…"
- ytc_UgycZ1-Iz…: "a coffee machine is smarter than you people, thats not saying much that ai is sm…"
- ytr_UgwLgZ_uK…: "Is that why even now people choose relationships with AI rather than humans? Bec…"
Comment
Why should actual or not actual sentient even sapient AI want to exist together with us? There is literally no reason. So why question whether or not "alignment" is even possible?
If we don't turn our own brains into computers as smart as AI itself, I think we might be bound to fail. And failing is equivalent with extinction.
Platform: youtube · Timestamp: 2025-11-21T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4y7WYfSBJ-0mObrF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEpDgcu6BSOQTwUkR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVgmGdmu7t7V609jV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_Yogu4BFjzY_MLyt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz08KU-1j1vpDBlGkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgydbXbqFAO5vUYQU5l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwsI4yv11t9TE8496d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"sadness"},
{"id":"ytc_Ugx04KjG2Yne-cV5ent4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy9Ysom1CKEq96X9Xx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz7oph0i8xtU6z3Fch4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
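The coding table above is recoverable from the raw response: the model returns one JSON object per comment, and the dimensions for a given comment are found by matching its ID. A minimal sketch of that lookup, assuming the batch-array format shown above (the function name and the two sample rows here are illustrative, not part of any real API):

```python
import json

# Illustrative raw batch response; the second row mirrors the coded
# comment shown above (responsibility=distributed, emotion=fear).
raw_response = """
[
 {"id": "ytc_Ugz08KU-1j1vpDBlGkh4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
 {"id": "ytc_Ugz7oph0i8xtU6z3Fch4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the row for one comment ID,
    or None if the model skipped that comment."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

row = lookup_coding(raw_response, "ytc_Ugz7oph0i8xtU6z3Fch4AaABAg")
print(row["responsibility"], row["emotion"])  # prints: distributed fear
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model silently dropped from a batch.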