Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by its comment ID.
Random samples

- "But isn't PalM2 a super old model? They did not use a recent Gemini model?…" (`ytc_Ugx4rC1Bo…`)
- "@Infiniti12345read again if it can integrate clinical data it can integrate ev…" (`ytr_Ugxp6UHG0…`)
- "I would love to know what bad things this guy said elon did, he mentioned 3 good…" (`ytc_UgwW0wlgR…`)
- "There's still more effort behind professional photography than AI image generati…" (`ytr_UgwSu554B…`)
- "Learn all that just so his AI can learn off it and make the knowledge useless to…" (`ytc_UgzRJMeXe…`)
- "Dr. Klein looks and sounds like a supervillain who could conspire to create a “m…" (`ytc_Ugy1EflJj…`)
- "This is why we shouldn't use driverless cars... THIS SHOULDN'T EVEN BE HAPPENING…" (`ytc_UgwdIVHhA…`)
- "@ChippWalters again human fallacy , you said you never seen an ai copy an image …" (`ytr_UgzhY_YQF…`)
Comment

> The problem with this is that AI is sycophantic, innit? It will always lean to what you're pointing and justify it however logically it can, even if you say you'll drown the kids yourself

Platform: youtube · Posted: 2025-03-15T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzkLRRgnS-hz69U5-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxB_rep2-pYyO-mrjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGiui3j5r9q2ghJvh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwm3rShyEM5d5_YBXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzsHPX7GFkQiX8G_VZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxAzu0FNmPtVAg-7iV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzw7sR7nZxFf0qyuG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwl17iy1D_ntcKyUEB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-lXzd5VODQaWHL154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0c1q96r7N_f1F2BR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
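The raw response above is a JSON array with one coding object per comment, so looking up a coding by comment ID reduces to parsing the array and indexing on `id`. A minimal sketch (the `raw_response` string here is a two-entry excerpt of the array shown above; field names match the "Coding Result" table):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgzkLRRgnS-hz69U5-h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwl17iy1D_ntcKyUEB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# Build an id -> coding index so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugwl17iy1D_ntcKyUEB4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Keying the index on the platform comment ID (`ytc_…` for top-level comments, `ytr_…` for replies, following the prefixes in the sample list) makes the coded output joinable back to the original comment records.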