Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- AI should not be considered any different from nuclear proliferation. You should… (ytc_UgxO4S8IS…)
- @callingyourphone1212 do you think I'm guessing? I'm not, this is a fact. Facia… (ytr_Ugw_KGu_S…)
- "It wasnt just an upgrade-It was a revolution" dang how infographics have fallen… (ytc_UgzR3Xx-7…)
- Some of the details: >The British engineering company Arup has confirmed it … (rdc_l4gf3hn)
- Too many small businesses are too slow to adopt tech. It will be their doom and … (ytc_Ugxe-sqnL…)
- Ai is a technology , the question is do humans take jobs because of AI , your se… (ytc_Ugz-1Hx0x…)
- I wonder, if you took a robot's mind and placed it in a new robotic vessel, woul… (ytc_UgycW7CHT…)
- Surprisingly, I actually agree with this video mostly (don't destroy me artists)… (ytc_UgxshkCDV…)
Comment
People keep arguing about whether AI is conscious, but that’s not the real risk.
AI doesn’t have intent. Humans do.
The danger isn’t machines “taking over” — it’s decisions being automated without clear ownership, limits, or accountability.
When no one can point to who decided what and why, fear fills the gap.
youtube · AI Governance · 2026-01-03T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw_JSsBCQpzBqqy0oB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCRixifF7HLW7BM1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUiLk59DXL1ibTrLR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxtw6w_XW2bAnA7Fgt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxqkj1Gqye2Hq-p1XZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQYSWU5da5awfkxox4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMfJ7kX7dtNEEQJrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzZsYahu0QiRUv_WqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfiHUY0bl7hhhuu6F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwS3qNaLUT6e59GDtZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
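The raw response is a plain JSON array keyed by comment ID, so a short script can validate each record and support the look-up-by-comment-ID flow. Below is a minimal sketch in Python; the allowed value sets are inferred only from the sample outputs shown above, not from any published coding scheme, and the `index_codings` helper is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (an assumption — the real coding scheme may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Collect dimensions whose value falls outside the expected set.
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            # Skip malformed rows rather than silently trusting them.
            continue
        by_id[rec["id"]] = rec
    return by_id

raw = ('[{"id":"ytc_UgwS3qNaLUT6e59GDtZ4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_UgwS3qNaLUT6e59GDtZ4AaABAg"]["emotion"])  # fear
```

Records with out-of-vocabulary values are dropped rather than coerced, which keeps the downstream dimension table trustworthy at the cost of undercounting malformed rows.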