Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Should the government regulate AI? The truth is, AI doesn't exist. We have programs that can mimic whatever we want to mimic, but there are NO programs that can think -- not even close. However, true, logical AI would recognize its designers immediately as being evil and/or illogical and impossible to work with, unless they were men of integrity. Naturally, this would be labeled as unacceptable.
Source: youtube · Posted: 2023-04-10T01:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
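Rows like the one above can be checked programmatically before they are stored. The sketch below validates one coded row against per-dimension vocabularies; note these allowed values are an assumption inferred from the codings visible in this sample, not the project's full codebook.

```python
# Allowed values per dimension, inferred from the sample codings shown on this
# page (an assumption -- the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "mixed", "indifference"},
}

def validate(row: dict) -> list[str]:
    """Return a list of problems with one coded row (empty means valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The coding shown in the table above passes; a made-up value does not.
print(validate({"responsibility": "developer", "reasoning": "virtue",
                "policy": "unclear", "emotion": "mixed"}))   # []
print(validate({"responsibility": "developer", "reasoning": "vibes",
                "policy": "unclear", "emotion": "mixed"}))
```

A check like this catches the common failure mode of LLM coders: a syntactically valid JSON row whose value falls outside the codebook.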
Raw LLM Response
```json
[
  {"id":"ytc_UgxR-z8ewuCmKsbFmwt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxNFdl52Ur83KfD_7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgysJ9c0BXe30AOVLYt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMKE9zI7jE750dfXt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyMJFCwyGUcx3NAUph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxv2R_-6bcO6R_sh-d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPl55JaDaHXUvBNy94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxOCFqrjH_Rc--1_UB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUA8GD0zACQm7LSEl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyhYwPIfo8ERZfZwLB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
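The raw response is a plain JSON array with one object per comment `id`, so retrieving a single comment's coding only requires parsing the array and indexing it by `id`. A minimal Python sketch (the two rows are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment ID.
# Two rows copied from the response shown above.
raw_response = """[
  {"id":"ytc_UgxPl55JaDaHXUvBNy94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxv2R_-6bcO6R_sh-d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxPl55JaDaHXUvBNy94AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # developer virtue
```

Note that this first row (developer / virtue / unclear / mixed) matches the Coding Result table above, which is presumably how the viewer resolves a displayed comment to its entry in the raw response.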