Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This was probably the worst AI video i have ever watched. This person is just ap…" (ytr_UgwQur6ht…)
- "So-called "artificial intelligence" will have even more general harmful effects,…" (ytc_UgzvGSNyU…)
- "I've been looking into this- most of them? They didn't. A few people train the A…" (ytr_UgzHhiPyN…)
- "I'm an artist but use cai but also hate ai so why? Well I don't make "art" using…" (ytc_UgzuMIhCe…)
- "Let's index UBI to automation. Did you and thousands of other people just lose …" (ytc_UgyWsJ3qM…)
- "I'm not an AI expert, but I can imagine nearly infinitely reproducing robots tha…" (ytc_UgyyEkvkm…)
- "Ok now my question is if AI is getting that smart then what does it want to do b…" (ytc_UgzLoKr8N…)
- "Forming an opinion on the safety of ai based on what Altman has to say ,is like …" (ytr_UgxwoO3VR…)
Comment

> This isn’t the ‘evil AI robots’ fault its humans, obviously we are biased and show it in data. If the hospital AI prioritises white people, it’s being trained to act like that and obviously it must be learning it from the hospital’s data on how they treat patients

youtube · AI Bias · 2023-11-05T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwKClSYNew9MYWFU3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy2eAaWkbByEu-0zRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxxgs-JskahLWYk5mF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwU34YbZ13zUQ6UTOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFsjruT0qBY2fUBhR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzuTDF7tCCujd_oCed4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw59LlgdOrvZ2bBi6d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyoi7pFk_sh-LUOBnJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwDCiLz3vztMrpeRSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxUIlsUGEKJJsRQ7Vt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
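The raw response above is a JSON array of coded records keyed by comment ID, so the "look up by comment ID" step can be implemented by parsing the batch once and building a dictionary index. A minimal sketch (the two records are copied from the response above; the field names match the model output, but the lookup helper itself is an illustrative assumption, not the app's actual code):

```python
import json

# Verbatim subset of the model's batch output shown above.
raw_response = '''
[
  {"id": "ytc_UgwKClSYNew9MYWFU3x4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxxgs-JskahLWYk5mF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

records = json.loads(raw_response)

# Index by comment ID for constant-time lookup, mirroring the
# "Look up by comment ID" control above.
by_id = {rec["id"]: rec for rec in records}

coded = by_id["ytc_UgwKClSYNew9MYWFU3x4AaABAg"]
print(coded["policy"])   # regulate
```

Indexing once up front means repeated inspections of individual comments avoid rescanning the whole batch.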