Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have been running a personal experiment. Use AI for varying times (extensive to not at all). The conclusions are clear. The more extensive the interactions, the more noticeable the degradation in my mental health.
I purposely have stayed away from AI the last few days, I already feel better.
Completely understand why some just up and decide poetry is the better career path. From now on I'm being extremely selective in how I interact with AI
Source: youtube | Topic: AI Governance | Posted: 2026-03-17T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxU0BrR8Ng_B8q2rEd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxA_AOLkUwpa6cUM5h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzWK55406De5A-6tSR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmSI05Tkuwd_QGcFt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxwgVWKteUB-GMCqBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwGTm8EadowUNOdw2B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyeqTrAaXILy-sM0nF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKrVTZXf39x8ThwTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyQhPht2gFnzUypKN94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwAblayAej0t2wXSxZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
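Because the raw model output is a JSON array of per-comment records, looking up the coding for a specific comment reduces to parsing the array and indexing it by `id`. A minimal sketch in Python, assuming the response has the shape shown above (the two records and the `index_by_id` helper here are illustrative, not part of the tool itself):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codings,
# shaped like the one shown above.
raw_response = """
[
  {"id": "ytc_UgxKrVTZXf39x8ThwTJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxU0BrR8Ng_B8q2rEd4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_id(raw_response)
record = codings["ytc_UgxKrVTZXf39x8ThwTJ4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

In practice the parse can fail if the model wraps the array in prose or emits malformed JSON, so a real pipeline would catch `json.JSONDecodeError` before indexing.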