Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples
- `ytc_UgzD8ExQJ…`: "I have never, ever seen a disabled person openly support AI. Ever. And yet the a…"
- `ytc_Ugxk-gflh…`: "He needs to go through infinite points where the distance between the points con…"
- `ytc_Ugys8zvQZ…`: "“8 trillion words in a single month of training” read that again. I start my BAS…"
- `ytc_Ugz8zbNMA…`: "urm if anything i think this strike is actually boosting the point of ai.. ai do…"
- `ytr_UgxhDlo0W…`: "they allowed to have access to weapons of war already the same shit they trying …"
- `ytc_UgyDDnX97…`: "Regardless of your opinion on AI art, any attempt to poison an AI model with bad…"
- `ytc_UgwLVnoQx…`: "If ai takes more employment. Less people with jobs equals less people spending l…"
- `ytc_Ugw1qmNhm…`: "who cares if ai art is even a thing, like bro artists used to die broke for 99% …"
Comment (youtube, 2025-04-07T13:3…)

> Humans naturally want to remove anything we feel is a threat to us, if we we progrem a robot to think like a person, why would it think any different? It isnt an If just a when.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz2EZiN34nqZCEXM-p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz_1CIk_ZrSFavXb_N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzx75t7Rg_g4zvqjoN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwFUvEjUenXJIOOEO14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzByUwBiUaVtYrg43l4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgOBiHL3DiLi4wP_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSzfIzJT53fXdOXHh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwNyMxJkH4kRj3UQCl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTLqjSDo0vXRHEtbh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgximXp7xgCz5cczLlF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
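Turning a raw response like the one above into per-comment coding results can be sketched as follows. This is a minimal illustration, not the tool's actual pipeline: the four dimension names are taken from the coding-result table above, the `parse_codings` function is hypothetical, and the abbreviated record in `raw` is copied from the sample response.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into a lookup table keyed by comment ID.

    Raises ValueError if any record lacks an ID or a coding dimension,
    so malformed model output is caught before it is stored.
    """
    coded = {}
    for rec in json.loads(raw):
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec['id']} missing dimensions: {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgzSzfIzJT53fXdOXHh4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgzSzfIzJT53fXdOXHh4AaABAg"]["emotion"])  # fear
```

Keying the result by comment ID is what makes the "look up by comment ID" view above possible: each dashboard entry is a single dictionary lookup.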