Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "My plasmatically hot take is ai is ok as long as it's for fun and not for capita…" — ytc_UgzLq-kg9…
- "I’m not worried about AI or robots I can’t even get my spellcheck to spell words…" — ytc_Ugy2C-Guo…
- "China are working without attacking others' ideas but contrary to others especia…" — ytc_UgwdfMDuJ…
- "Honestly if i posted an ai generated thing and Sheldon calls me out, i'm dipping…" — ytc_UgyC7VISx…
- "Now, it's already coming. Good designers stay and average, beginners are going t…" — ytr_Ugy0UO6Of…
- "\"Never buy on hype or charisma\" Terence McKenna. Turns out he was right on this …" — ytc_Ugx3EaoC9…
- "Ask an AI when PhD economists should have figured out Planned Obsolescence in au…" — ytc_UgxOYQgpU…
- "I feel like ai isn’t necessarily bad, until people make art and claim that they …" — ytc_UgwkK4-YA…
Comment

> Thankyou, a great conversation which supports my intuition as to what is occurring with AI and the world. While it may not be obvious to some, we are seeing AI leading a Genocide in Gaza and other middle eastern neighbours. This is a first step and this will be turned upon us. AI is far more dangerous than nukes. People need to pressure governments to act responsible before it’s too late for all

youtube · AI Governance · 2025-06-28T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxX1oUyygziLK4vXPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwBvu-xxgahKsWV4lF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwlL_FmBYnhOwEV4Ex4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgysDl0UNb0FqQN0WKF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwtr8rCihfuk7Vf-uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCLi-a9qrVIhXsxQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIWCLbJwUoVlEgto94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxrPRPw38pDUYJBr-x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCZYlj-uYAL77nyBd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZTAaAXFtcvfufPjx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
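The lookup-by-comment-ID workflow above can be sketched as parsing the raw JSON array and indexing the coded rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; the record shown is copied from the raw response above, and the dimension names match the Coding Result table.

```python
import json

# One record copied from the raw LLM response above (the government/fear row).
raw_response = """
[
  {"id": "ytc_UgyCZYlj-uYAL77nyBd4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
"""

# Build an index from comment ID to its coding result.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the exact model output for a coded comment by its ID.
code = codes_by_id["ytc_UgyCZYlj-uYAL77nyBd4AaABAg"]
print(code["policy"], code["emotion"])  # → regulate fear
```

Indexing once and looking up by ID keeps inspection O(1) per comment, which matters when a batch response contains many coded rows.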