Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Well I could see where the Defense Companies would be waking up realizing this m…
ytc_UgzdifSpd…
A loss for ai art, win for AI overall. Hopefully they'll now focus all of their…
ytc_Ugyf1_dsi…
I dont think AI will completely replace programmers. If anything, it will make s…
ytc_UgwJ4Vfzk…
@MrSirFluffy seriously, they’re conflating two problems: Atrioc jerking it to pe…
ytr_UgwlJdkTd…
Every million dollar cut of executive salary will save them, what, less than 10 …
rdc_czls228
This sounds awesome...and then there's the $50k/yr tuition. I'd love to send my…
ytc_UgwrCUlnP…
" if a robot became conscious and demanded rights , then murder and torture are …
ytc_UgxiLOkTX…
last thing i need is a robot calling me a punjabi dingadingaling while im just t…
ytc_UgxBaHmt8…
Comment
before this video is getting any more gay. large langue models sadly trained in part in the social media garbage and some very trash content found there. but there is one problem with all of this whole fear. you may know that theese models wont run all the time just if you promp it. also often the output is sampled so the model might say something what it was saying otherwise. most importantly even current ai models are often very dumb.
youtube
AI Moral Status
2025-12-20T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyv8XOTvvpYlfN4vu94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwY34_mSErYiBkx4D14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3gwg1jEnMrwsYFct4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6m7BzIxf4ah0N9tF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKpsaGVSK_OMw_2_R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBU67NVmpNMWtiso14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQQNWqLYdKaHB-aFl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxlc7FrAshjcNZdFth4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0-0OdH5_6eb3-Qr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwM6QvF5Sl1BxrKMfd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
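The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of parsing such a response and indexing it for lookup by comment ID — the `index_by_id` helper is illustrative, not part of the tool, and the two rows are copied from the response shown:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugyv8XOTvvpYlfN4vu94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwY34_mSErYiBkx4D14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a batch coding response and index the codes by comment ID."""
    codes = {}
    for row in json.loads(raw_json):
        # Basic validation: every dimension must be present in each row.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        codes[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codes

codes = index_by_id(raw)
print(codes["ytc_Ugyv8XOTvvpYlfN4vu94AaABAg"]["emotion"])  # outrage
```

Validating that every dimension is present before indexing catches truncated or malformed model output early, rather than at display time.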