# Raw LLM Responses

Inspect the exact model output behind any coded comment, either by looking up a comment ID directly or by browsing random samples.
## Random samples

- "So if a human uses that same information/data to produce their own image and you…" (ytc_UgyyoH8lb…)
- "I just had some plumbing done in my house and there is no way ai can replace tha…" (ytc_UgzYXdpmJ…)
- "My immediate first thought is a pretty common set of phrases that internet comme…" (ytr_Ugxnv-cXD…)
- "Nation states will not survive eliminating most jobs while concentrating wealth.…" (ytc_UgyaaD4ND…)
- "I love this. However. I think students using AI is catching up with the number o…" (rdc_nu290r5)
- "the writes will start using ai tools for themselves, just to make their own live…" (ytc_Ugxpzvimt…)
- "In the AI war, Its the normal people vs the demons of the tech... Elon Musk, Zuc…" (ytc_Ugycain_N…)
- "Google engineer fired for being idiot. The AI is not sentient. Not even close. M…" (ytc_UgxClMsLy…)
## Comment

> We need to remember that the true danger is humans. In the video scenario, the first mistake was politicians trusting the new AI to make decisions on international politics. In another scenario (not this video) humans trust AI to make decisions on the battlefield, leading to castastrophic losses. It will always be on us to decide how much power we give the AI. And remember: We can always unplug it (in a sense).

youtube · AI Governance · 2025-08-02T10:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[
{"id":"ytc_UgxrvlogwJZB58hOB_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoUx87KXqIWbwN1Q14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGImwA63MuO5spN-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzrSXfQ9Bjmmnq2FQt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzrhff0H_Xx4-NH7Mh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMTt3LriXXuv4xDjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxddJo4Ept2MASBc114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsj7GVSNn7RBI2O614AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy1ISDQceYi74QCqG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2IUP-6IzJk8ppuxl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
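The lookup-by-ID flow above can be sketched in a few lines of Python: parse a raw batch response of this shape, index the records by comment ID, and check each record against the codebook. The `SCHEMA` sets are inferred only from the values visible on this page (the real codebook may allow more), and the function names are illustrative, not the tool's actual code.

```python
import json

# A raw batch response in the same JSON shape as the array above,
# reduced to one record for illustration.
raw_response = """
[
  {"id": "ytc_Ugwsj7GVSNn7RBI2O614AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
"""

# Allowed values per coding dimension, inferred from the samples shown here;
# the actual codebook may differ.
SCHEMA = {
    "responsibility": {"user", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index its records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def validate(rec: dict) -> list:
    """Return the dimensions whose coded value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items() if rec.get(dim) not in allowed]

codes = index_by_id(raw_response)
rec = codes["ytc_Ugwsj7GVSNn7RBI2O614AaABAg"]
print(rec["responsibility"], validate(rec))  # user []
```

An empty list from `validate` means every dimension carries a schema-legal value; a non-empty list flags records where the model drifted outside the codebook and the comment needs manual re-coding.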