Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What are we doing!?!?! Why are we racing towards AI?? All AI will do is cause pe…" (ytc_Ugw3ibEna…)
- "I work making AIs (mostly for medical and security stuff) and being completely r…" (ytc_UgwvA6bJt…)
- "Nahh bruhhh this ai shit getting weird u either have a racist ai thats biased to…" (ytc_Ugxcnmpd8…)
- "i gave up on learning how to draw when i was 13, but seeing these people steal a…" (ytc_UgxYGURze…)
- "So you think it's a bad idea then why you invented it in the first place? Oh rig…" (ytc_UgxkEiUGX…)
- "The world is controlled by a group of people who’s entire existence has been dec…" (ytc_Ugy8wiGiO…)
- "Yeah, that's a cool idea. I love AI, and hate when people monetize it directly.…" (ytr_UgxglzFCM…)
- "THIS, this is what i’ve been saying for over a fucking decade, going through ele…" (ytc_UgxIMhhCg…)
Comment
Simply and succinctly put. With this explanation it's now clear that the users or operators are rather to be feared than the AI itself.
The future looks great and challenging as there's so much to learn and adapt to.
Voice quality was top notch. Great video. Thank you
youtube · AI Governance · 2025-08-08T02:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyVbUkSskYy9NHJAM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPdXXk_F2kh5CqoR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx58rWy72FDFpASMWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTRaQzBVo_LSj3UMB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwb9B5BqFjDF7TY5CZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOJCKuPzN2DB7QaDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwI4MAacHt22JQGWUt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_Ugy1fTA35Nuu5h6T7-N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxI9sirke1RG0oy8dV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3GhKot0cNPD7mYCd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
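The raw response above is a plain JSON array, one object per comment, with one value per coding dimension. A minimal sketch of parsing and validating such a batch response is below; the per-dimension vocabularies are inferred only from the values visible in this page and may be incomplete in the real codebook.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example using one record from the response above.
raw = ('[{"id":"ytc_Ugy1fTA35Nuu5h6T7-N4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["reasoning"])  # -> virtue
```

Validating against a closed vocabulary at parse time catches the most common LLM coding failure, an out-of-schema label, before the record reaches analysis.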