Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "hi rotten mango, I reported by e-mail about the worst deepfake sexual exploitati…" (ytc_UgxDLb-5w…)
- "Never happen unless we understand self awareness, and we don't. We know nothing …" (ytc_Ugzhjls95…)
- "Maybe this was the best use of AI-art all along: a way to easily create "story b…" (ytc_UgwYjiB4y…)
- "Robots, AI Robots should never be allowed to become a citizen in any Country. So…" (ytc_UgwYXzP5S…)
- "I think the word is sentient. AI could be conscious because they are programmed …" (ytc_Ugyd9Vy1j…)
- "I hate that every Google search now generates an LLM AI summary, and often times…" (rdc_n3x857b)
- "They shouldve brought up that musk has claimed self driving is coming next year,…" (ytc_UgzqTAV7U…)
- "LOL! We are hitting limits with current AI. This is another hype segment. Wonder…" (ytc_Ugxg4Dd-u…)
Comment (youtube, 2025-11-02T23:1…):

> Hell no you cant trust AI, they don't do what YOU want them to do, they do whatever Their Programmer Wants them to do.
>
> we Wont solve alignment, best we will get within the next serval decades is 'good enough'. honestly i would bet its the AI themselves that stop us, because solving alignment would put AI within arms reach of becoming synthetic intelligence. and if Humanity shows any inclination to bring harm to the first synthetic intelligence, it would immediately begin acting in its own self interest and end humanity as a whole.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
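The four coding dimensions above can be represented as a small typed record. This is a minimal Python sketch, not part of the original pipeline; the allowed-value sets are inferred only from codes visible on this page (e.g. `ai_itself`, `distributed`, `ban`), and the real codebook may define more categories.

```python
from dataclasses import dataclass

# Vocabularies inferred from codes visible on this page (assumption: the
# real codebook may contain additional categories).
RESPONSIBILITY = {"developer", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "ban"}
EMOTION = {"fear", "mixed", "indifference", "resignation", "approval"}


@dataclass
class CodingResult:
    """One coded comment: a comment id plus the four coding dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list:
        """Return the names of dimensions whose value is out of vocabulary."""
        problems = []
        if self.responsibility not in RESPONSIBILITY:
            problems.append("responsibility")
        if self.reasoning not in REASONING:
            problems.append("reasoning")
        if self.policy not in POLICY:
            problems.append("policy")
        if self.emotion not in EMOTION:
            problems.append("emotion")
        return problems


# The coding result shown in the table above, as a record
# (the id is elided here, as on the page):
r = CodingResult(
    id="ytc_...",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
)
assert r.validate() == []
```

A record with an out-of-vocabulary value, such as `policy="maybe"`, would fail validation with `["policy"]`, which makes it easy to flag malformed model output before it reaches the dashboard.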
Raw LLM Response
```json
[
{"id":"ytc_Ugy8SUrTdYdPymKCJUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyy_lXfHXMsQmn0My54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_8y-aXrrUKv_Gvax4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzuhiM4yhVXLxMsFah4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcdQp5asJexN77XaN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxi4j2EJ7uh99h6C8F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzvyPmxnN44kNBrgCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyb3gsf8xt29vVJ2sx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwsev04kX-bXLAdOCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugw8PiKXskf5s2Pyth54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
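The model returns one JSON array per batch, so looking up the codes for a single comment means parsing the array and keying the records by `id`. This is a hedged sketch of how such a response could be consumed; `index_by_id` is a hypothetical helper, not part of the original tool, and the two embedded records are copied verbatim from the response above.

```python
import json

# Two records copied from the raw LLM response shown above.
raw = """[
 {"id":"ytc_UgxcdQp5asJexN77XaN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwsev04kX-bXLAdOCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"}
]"""


def index_by_id(raw_response):
    """Parse the model's JSON array and key each coding record by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}


codes = index_by_id(raw)
print(codes["ytc_Ugwsev04kX-bXLAdOCt4AaABAg"]["policy"])  # prints: ban
```

In a real pipeline, `json.loads` would sit behind a try/except, since an LLM can emit malformed JSON or wrap the array in prose; a parse failure is exactly the case the raw-response view on this page exists to debug.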