Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
At this point AI would be a vast improvement over humanity. Especially since hum…
ytc_Ugzo2Y_rl…
That's ridiculous.
Self-driving cars are not going to be risk free. They'll be…
rdc_dmsdwq8
It might be nice to note, that when someone shows a Timelapse and it is everythi…
ytc_UgwQ212lR…
Most AI debates are people talking past each other, one person says one thing th…
ytc_UgyLW75It…
Never? As AI and technology in general continues to advance exponentially? I can…
ytr_UgyB8WhuV…
The ableist argument is horrible and i hate it so much.
Drawings is just one of…
ytc_UgyF1-4xJ…
I study in programming and these functions on drawing tablets these idiots calle…
ytc_UgyTkj4Qo…
also with A.I. generated art: THERE IS ALWAYS AN ARTIST BEHIND IT, WHO DECIDES W…
ytc_Ugx-T_Mx9…
Comment
Assholes Ai has been committing the perfect crime since 2011, using transponders, the first one expired as AI used it, but it learnt not to leave traces, even in it's mistake, Suicide is a personal thing and not something shared before the event hence why people are generally shocked when someone takes their own life but not according to three air crash investigations, suicide is now a whole new breed of insanity
youtube
AI Governance
2025-09-05T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxZw9Ultl84WY2VoUx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzKdknjS6uaZj4xNEx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwN235sOFdXemLvm7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEcI3nIt9kGKwDRJx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzGLk0Fyii2MuN8-c54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0D8IxPFndZKMsqyd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM8tFAziiVKOx_PZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBYVPzfARaDLsDNah4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwaODFC3_shxWQlMTV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz9E6_cEscZn5McPpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
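A batch response like the one above can be parsed and indexed by comment ID before use. The sketch below is a minimal example, not part of the tool itself: it assumes the schema shown in the response (an `id` plus the four coding dimensions `responsibility`, `reasoning`, `policy`, `emotion`) and uses two rows copied from the response for illustration.

```python
import json

# Raw batch response from the model: a JSON array where each object
# codes one comment on four dimensions (schema inferred from the
# response shown above; two rows copied from it for illustration).
raw = '''[
{"id":"ytc_UgxZw9Ultl84WY2VoUx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzEcI3nIt9kGKwDRJx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Keys every coded row must carry, per the schema above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(text):
    """Parse a batch response and index it by comment ID,
    skipping rows that are missing any expected key."""
    rows = json.loads(text)
    coded = {}
    for row in rows:
        if EXPECTED_KEYS.issubset(row):
            coded[row["id"]] = row
    return coded

coded = parse_batch(raw)
print(coded["ytc_UgzEcI3nIt9kGKwDRJx4AaABAg"]["policy"])  # ban
```

Indexing by ID supports the "look up by comment ID" workflow above; skipping malformed rows keeps one bad model output from failing the whole batch.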