Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "REJECT AI OR END UP ON ROADS. YOU ARE ALL SHOOTING YOURSELVES IN THE FOOT.…" (ytc_UgwknSG8G…)
- "I know it's difficult to see from your perspective, but I feel like for people l…" (ytc_UgywyGiNY…)
- "What he said isn't accurate, but their system prompt likely does start with some…" (ytr_UgwQ6laHw…)
- "So that's how we doin it. Stealing designs from AI and make your own art, coz AI…" (ytc_UgxxOZRsQ…)
- "AI is a tool, and it should be treated as such. At university, my professors hav…" (ytc_UgzNfAmci…)
- "\"a camera does not create anything for you...\" Then what makes candid photograp…" (ytc_UgxGYNnlV…)
- "what the hell is all of this a quick cash grab on ai hype ?…" (ytr_Ugy4UJB48…)
- "Every interviewed AI researcher is one ;) However, Hinton was a rather big shot,…" (ytr_Ugwvy1y_O…)
Comment

> the problem I have with this is this: robots (even AI) require programming. Lots of decisions are just opinions, not 'right' or 'wrong.' so who's opinions are going to be programmed into these robots, from which they will make decisions? Is a robot doctor going to make the 'right decision' in, say, continuing treatment after a patient refuses care?

Platform: youtube
Posted: 2013-06-22T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1owY8KMQQpaUVoL94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxCHEe9gdHj20ZduC54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKrONnTALkMSiXPy94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyndkOOK5UcwRDL7P14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugxf1ljBqJhDZTII-rN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVWbMuIk6D-APhN2t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwIMriQa68OOqDcqr14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzbBcYCXu1UFiCis_x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzZyZyocjtkajZUTrt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyGJ-lF5FZQ7m-da54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
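Because the model returns one JSON array per batch, a coding for any single comment can be recovered by indexing the array on the `id` field. A minimal Python sketch of that lookup (the function name and the validation step are our illustration, not part of the tool; the dimension names match the JSON above):

```python
import json

# Truncated example of a raw batch response, using two entries from above.
raw_response = """
[
  {"id": "ytc_Ugx1owY8KMQQpaUVoL94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxCHEe9gdHj20ZduC54AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# The four coding dimensions shown in the result table and the JSON.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch response and index each coding by comment ID,
    skipping malformed entries that lack any expected dimension."""
    index = {}
    for entry in json.loads(raw):
        if "id" in entry and all(dim in entry for dim in DIMENSIONS):
            index[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return index

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugx1owY8KMQQpaUVoL94AaABAg"]["emotion"])  # fear
```

Validating every entry before indexing means a partially malformed model response degrades to missing rows rather than a crash, which is usually the safer failure mode when coding thousands of comments.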