Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.

Random samples
- Excellent point. I have seen AI get basic information I happen to know wrong on… (ytr_Ugya5Zyef…)
- The distinction between tools and agents is the fundamental principle. As tools … (ytc_UgxQMAuuz…)
- AI should be used and developed in places where it would actually helping humani… (ytc_UgzmYYKVE…)
- @misticair i did. clearly not ai because if you did read it you can see for part… (ytr_Ugzq5z66M…)
- The video is pathetic. Let the viewers listen to what the robot is saying or at… (ytc_UgzuYwRPl…)
- The big thing is if America stops AI development, that doesn't mean other countr… (ytc_Ugzo-UX0e…)
- Lol… the issue might actually be that some “humans” aren’t conscious! and they l… (ytc_UgwgxkmwN…)
- Can we stop mentioning mecha hitler every time Grok or musk get mentioned? Just … (ytc_UgwmSr8ML…)
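The same lookup can be done programmatically against the stored coding output. The sketch below is a minimal example, assuming a hypothetical coded_comments.json that holds a JSON array of records shaped like the Raw LLM Response shown at the end of this section; the file name and layout are assumptions, not the pipeline's actual storage format.

```python
import json

def load_coded_index(path: str = "coded_comments.json") -> dict:
    """Build an ID -> record index from the stored coding output.

    Assumes the file is a JSON array of objects shaped like the
    Raw LLM Response shown at the bottom of this section.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Look up one comment by its ID (this ID comes from the batch shown below).
index = load_coded_index()
print(index.get("ytc_UgwQo92iQ3BEzYXycuh4AaABAg"))
```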
Comment
One observation. With some jobs, some people may prefer humans (touch) like with massage therapists. Others may just prefer humans PERIOD. Sort of a bias if you will. Not something probably anyone really ever thought would even be an asset or even necessary.
I guess these people who prefer other humans are "Anti-AI-cists".
Guess Websters can add a new term to their dictionary now. 🤨
Source: youtube · AI Governance · 2025-09-29T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
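Every comment is coded along the same four dimensions shown above. The label sets in the sketch below are reconstructed only from the values that appear in this batch, so they are an assumption; the full codebook may define additional labels.

```python
# Label sets observed in this batch (assumed; the full codebook may contain more values).
CODING_SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"regret", "outrage", "fear", "approval", "indifference", "mixed"},
}
```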
Raw LLM Response
[
{"id":"ytc_Ugxdp_5g7mOTYsbDm0x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"regret"},
{"id":"ytc_UgwBTPaLoIbdseVxDqJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxniTVnZn8cDGJFnTJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxggkFRYJYy-T-iAeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxW5xB5MiWq0JZsmax4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQo92iQ3BEzYXycuh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtzfXPsm_t6WizcuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJuzW6_WUCPs9dGOB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPBgNvm9q4wXwd4H54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFsh9Aqy2pK2j06VV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
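The raw response covers a whole batch in a single JSON array, so each stored coding result has to be matched back to its comment by id. A minimal parsing and validation pass might look like the sketch below; raw_text is assumed to hold the JSON array above, and the dimension names are taken from the records themselves.

```python
import json

REQUIRED_DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Parse one batched LLM response and index the coded records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        missing = [dim for dim in REQUIRED_DIMENSIONS if dim not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        coded[rec["id"]] = {dim: rec[dim] for dim in REQUIRED_DIMENSIONS}
    return coded

# The entry whose values match the Coding Result table above:
# parse_raw_response(raw_text)["ytc_UgwQo92iQ3BEzYXycuh4AaABAg"]
# -> {"responsibility": "unclear", "reasoning": "mixed",
#     "policy": "unclear", "emotion": "indifference"}
```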