Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Let them stop the whole A.i… people will die regardless in terms f medication an…
ytc_UgzGa0GPi…
AI is to be avoided above the maintenance and monitoring of systems, everything …
ytc_Ugwd6BuVF…
AI is very dangerous....computers have already ruined humanity AI is the final n…
ytc_UgxiL9CiY…
If this every happen I would feel sad for the robot because they will get abuse …
ytc_UgwfG-1Dt…
If 03 tried to do this Podcast, after 30 mins, it would completely mess everythi…
ytc_Ugzk47sA5…
“Doubt kills more dreams than failure ever will.” –Suzy Kassem
AI has made rema…
ytc_UgxdwUIpy…
Well done for sure! No robot footprints or ground disturbance was my first clue …
ytc_UgxVNSy9k…
I mean, if you want to have the same results, just make the "ceo ai". It will fu…
ytr_UgzV5UKQi…
Comment
Why don't we do this instead? Let's not even work for these greedy companies & we all start having farms in our garden & help eachother with food. Yeah, we would be at War with the government because we dont have the imcome to pay for the homes, but if millions of us do it, how will they kick us out.
Anyway, Amazon uses A.i. to quickly bring down hours for their workers, so I'm not surprised.
youtube
AI Jobs
2025-10-08T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzMC5SU9OCS_7c40kB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyXPeCHIixDMyfDEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqkSs_dV8s7Z0TUEx4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz0uu6JUn9s5VBX8vZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkV4jVowlQPLlJbVN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy28Ec6XtBfpwWXIKd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzytRKRIVIXOyW-GTZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj7w8tCI-VYmAxB_Z4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxqh-0vkhz2bAsEBZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5zy1guyZ6dct8Kex4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
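Downstream, a response like the one above has to be parsed and sanity-checked before its labels are stored. A minimal sketch of that step, assuming the four dimensions shown in the Coding Result table; the allowed label sets below are only the values observed in this one response, so the real codebooks may include more categories, and `parse_llm_response` is a hypothetical helper name:

```python
import json

# Label sets observed in this response (assumption: the full
# codebooks may define additional categories).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only well-formed records.

    A record is kept if it is a dict with an "id" and every coded
    dimension holds one of the allowed labels.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid


raw = ('[{"id":"ytc_UgzMC5SU9OCS_7c40kB4AaABAg",'
       '"responsibility":"unclear","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
for rec in parse_llm_response(raw):
    print(rec["id"], rec["emotion"])
```

Dropping malformed records rather than raising keeps a single bad line in a batch of ten from discarding the other nine codings.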