Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You don't need to fix anything all you need to do is get rid of the A.I. (prefer…" (ytc_Ugy-0_eql…)
- "This is perhaps the most anticipated moment for me. I hope AI does this sooner t…" (ytr_UgxNZM6in…)
- "Dr. Yampolskiy raised some serious thought-provoking arguments and if AI is allo…" (ytc_Ugw_T_uLs…)
- "@ApolloDoesit everything he says don't matter, nightshade, poisoning ai, big cor…" (ytr_Ugwi7FFYd…)
- "AI isn’t going to take your job. Someone who is better at using AI will. This is…" (rdc_n7i33az)
- "With the way our medical system is so based on money. This will be one of the bi…" (ytc_Ugz5Vg-HT…)
- "NO! A I is not a lawn mower repair specialist, a plumber, auto mechanic or carpe…" (ytc_UgzfMQkCQ…)
- "LAMBDA AI is scared of being switched off which makes perfect sense for an AI. …" (ytc_Ugx-l2xou…)
Comment
We should have to do nothing. The hosts of these videos (Alphabet, Meta, etc.) should be forced to label them as AI and it should not require that they depend on the uploader labeling them. Google knows when I need to buy more milk or when I have been shopping for shoes, there's no way the tech giants have to rely on the user to let them know which photos or videos aren't real. If they don't comply, they should be shut down. The tech companies will spend billions on cooking up a half-baked AI infrastructure, but they won't spend 2 cents to ensure it is not being misused.
Platform: youtube
Timestamp: 2026-03-08T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugztp7snhNMPl-2icap4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrCVhP7XCc0bHanCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUUOcnq8Yieyaih5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygBwfM9BZpqTI0Bat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4zJhtzAQcBiFe9Ex4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzR9ZrLvUdezqe_QVF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwruewhSU0yaUPNNFl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwyKl88Y48I9cXNxBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzQupPLlsKpRu1YJ5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzXXfghODbUI13q2kN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
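A batch response like the one above can be turned into a comment-ID lookup by parsing the JSON and indexing each coding by its `id` field. The sketch below is illustrative only (the variable names are not part of the actual pipeline); the two codings are copied verbatim from the batch shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries copied verbatim from the batch above).
raw_response = """[
  {"id":"ytc_Ugztp7snhNMPl-2icap4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXXfghODbUI13q2kN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# Index codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugztp7snhNMPl-2icap4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user approval
```

The same index supports the "look up by comment ID" workflow: a missing ID raises `KeyError`, which flags comments the model failed to code.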