Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "It's going to be a full-time job just to catch ai's mistakes seriously I'm not j…" (ytc_Ugzsv-2Po…)
- "Large Language Models are not capable of logical thought of any kind. All LLMs w…" (ytc_UgwVgPXBH…)
- "I do have a serious question, is the person who when through the fairly difficul…" (ytc_Ugzht5Vy4…)
- "I seriously hate Ai, like I want to look up actual artists instead of Ai ones, a…" (ytc_UgwX3GYGQ…)
- "This COULD be countered by another advantage of self-driving cars: instant commu…" (ytc_UghSFLJx8…)
- "I think there are parallels with the dot com bubble but it's not exactly the sam…" (rdc_n7ynqa5)
- "No one will listen. Ever. People's jobs force them to have allegiance to the…" (rdc_deghl3f)
- "And the benefits are created by stealing in the first place, if they need ai for…" (ytr_UgyMveNtO…)
Comment

> Basic information technology science tells all of us that if AI says something you need to corroborate it. No exceptions. If you act upon it without cooperating, it's all on you. That's science. In math. Trigonometry. And pretty much the right thing to do if you do anything different you're on your own. It wasn't you. It was AI and you asked it to do exactly what you asked to do. Now you're on call.

| Field | Value |
|---|---|
| Platform | youtube |
| Posted | 2026-04-07T00:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzetj1DTtULOkaqZ2t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZ9F0uyMd-34AOVu14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxc1GNh9iNL9nxY9ZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxp7rpcAIxP8PSfY0l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRXXht6kx9ZSK5ezh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugws3HA-7Av4Vz2zNVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWOBJ3budpGw8q-sJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugykbd6qbdkoWwfJWTR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyUzp8RGocHh7SlcBJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwQiAC2cr1VqLwaC6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
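A raw response like the one above can be parsed and sanity-checked before the codes are stored. Below is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the allowed category sets are inferred only from the values that appear in this one response, not from an authoritative codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension -- inferred from the codes seen in this
# response, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "outrage", "approval", "disapproval", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if a row is missing its ID or uses a value outside
    the inferred category set for any dimension.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example with the first row of the response above.
raw = (
    '[{"id":"ytc_Ugzetj1DTtULOkaqZ2t4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)
coded = parse_coding_response(raw)
print(coded["ytc_Ugzetj1DTtULOkaqZ2t4AaABAg"]["emotion"])  # indifference
```

Rejecting out-of-vocabulary values here, rather than downstream, makes it easy to spot responses where the model drifted from the requested label set.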