Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Unfortunately, what's the answer to that going to be? Probably autonomous system…" (rdc_ic18bj1)
- "True AI is dangerous. AI Bias... Ridiculous! There is neither ethics nor moralit…" (ytc_UgwVRiMnj…)
- "This automated idea is the stupidest thing ever. We all know how great computers…" (ytc_UgyqkkDCd…)
- "So I can chat with AI and say the absolute craziest things and it will use that …" (ytc_Ugzz5GH9U…)
- "In my opinion AI can be helpful sometimes, saving a lot of time but we also need…" (ytr_Ugx-NYDHu…)
- "Sarcasm side of me is just proud the ai is a star wars fan lol. Least it seems t…" (ytc_Ugzmvskzs…)
- "Google Capture the Flag AI test , and read about how science really doesn't unde…" (ytc_UgzXbljzU…)
- "My job is safe from AI. But will there anyone left that can afford my services? …" (ytc_UgzajsipG…)
Comment

> So as long as we’re still good for something the AI is not so good at we’re good. But isint the AI eventually gonna get good or better then good at everything? What we gunna do then?

Source: youtube · Posted: 2026-02-06T05:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyto9ptKRbtLb3ilit4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwcj3c6_jV5J1AQvWt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6okYPmU_Y-mrj5q94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeQo42mVMzXrGpXaB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwBhZsnzAJ8vVrYq8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyiWEKdr3i43xgQL4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9TixES5hDJvrqDPR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxC5R-43fUn2nYhDoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzkHLrkFKHNXtDqjbx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwdq_Dnimco9WTi8Z14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
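A raw LLM response like the one above can be parsed and indexed by comment ID so each record's four coding dimensions (responsibility, reasoning, policy, emotion) can be looked up directly. The sketch below is a minimal example of that, assuming the response is a JSON array with exactly the field names shown in the dump; the validation logic and function names are illustrative, not part of the tool itself.

```python
import json

# Two records copied from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_UgyiWEKdr3i43xgQL4F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy9TixES5hDJvrqDPR4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a raw LLM response and index each record's codes by comment ID."""
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        # Each record must carry an id plus all four coding dimensions.
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codes = index_codes(raw)
print(codes["ytc_UgyiWEKdr3i43xgQL4F4AaABAg"]["emotion"])  # fear
```

Indexing by ID mirrors the page's "Look up by comment ID" feature: once the batch response is parsed, any coded comment can be retrieved in constant time.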