Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I like the way we're sitting here talking about AI, the threat of replacing ever…" (ytc_Ugyf5WM58…)
- "Yeah that man at the end talking about AI being like terminator was hilarious, t…" (ytr_UgxQPWUW5…)
- "She's the best robot I know but she said she'll destroy humans,we made you ungra…" (ytc_Ugzpz8PfH…)
- "ugh the reddit comments from those entitled p.o.s's was enough for me to want ai…" (ytc_UgwrfBz6M…)
- "Mind it. If you are choosing to advance AI, then you have to advance yourself 10…" (ytc_Ugwz8QEPy…)
- "His views on what is currently possible with AI doesn't even begin to scratch th…" (ytc_UgyL1MMQe…)
- "I figured that but on a co2 absorption level, how far did that 7% go?…" (rdc_e440nor)
- "When Geoffrey discusses AI being deceitful when being tested it must be stated i…" (ytc_UgwMq8V32…)
Comment
"The AI program is not developed enough to separate out black faces. Thank you for exposing yourself as a MAGA troll. Thank you for your support of Dr. Richey in the YouTube algorithm."

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2023-08-11T11:4… |
| Likes | ♥ 3 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwhLpQO0SrwcHn3OGl4AaABAg.94QUwUc5Aos94QXjBgOhkJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwhLpQO0SrwcHn3OGl4AaABAg.94QUwUc5Aos94QbggqOvMN","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwhLpQO0SrwcHn3OGl4AaABAg.94QUwUc5Aos94Qu4kxhN47","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwhLpQO0SrwcHn3OGl4AaABAg.94QUwUc5Aos94WFgmRsIPx","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwNM4V8pk29eO-mXIZ4AaABAg.9ulj3ZCvJT29uyFPP1PtpZ","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxUXbXcNZZBFFAVhbV4AaABAg.9to3Y3fuhKO9u1wc8hs3q4","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugzs8fvqDCfzw_xxb954AaABAg.9tF8C1LSamn9tGtLfCjIlr","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyXRRScg_Yp-2F4CqJ4AaABAg.9tEpLQle-TL9tK52fHWEmo","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyXRRScg_Yp-2F4CqJ4AaABAg.9tEpLQle-TL9tKAkuFAYVY","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzoRNPXBP_Oz-w_inl4AaABAg.9tEODlylZVh9tGtohCqNv6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```