Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The environmental costs of AI and corporate greed will create more inequities an…" (ytc_UgxOa8Jpv…)
- "As usual, all the normies don’t understand AI at all, and the government getting…" (ytc_Ugwozp9vO…)
- "Perplexity says: The worst-case scenario with AI and robotics that could endange…" (ytc_UgxWA3rdW…)
- "You cant trust Fox News & Facebook to not rage bait with fake AI bullshit lo…" (rdc_nw9h0c6)
- "To them, You just confirming that they made the right decision. AI will show up …" (ytc_UgzDZXhT-…)
- "the fact that AI needs the same water we need to exist shows me there is trouble…" (ytc_UgxmdZS4B…)
- "You should keep fighting back against AI art and AI in general. From literature …" (ytc_UgxuWvZy6…)
- "He tells we never had anything smarter than us. How about Jewish people smarter …" (ytc_Ugyn_dloa…)
Comment
There is a HUGE hole in the whole AI model: AI does not have a will of its own!!! And there is NOTHING Geoffrey Hinton or anyone else can do about it.
They try to promote and PR the AI by voicing those fake alarms. People do not care about AI that much despite all the money invested in it so far!!
youtube · AI Governance · 2025-07-10T16:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw5EWFvhkSeSj526hB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwUqV2FBdjn3s5sAjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyaerjl2ScACRdrcxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWfJmnh8a1ukRdN4J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwCi_itcHGrqSHmR9B4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx97YQBzVZrZFjymMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxM7A_NAtDUnUwUpqV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxILpeyjj0KQiLivyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzWuECOSpZl2RCJJXh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydE9NMfW-pTwbnFW14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
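The raw LLM response above is a JSON array in which each object codes one comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed value sets are inferred only from the codes visible in this batch (the real codebook may contain more categories), and `validate_batch` and `ALLOWED` are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# batch; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment.

    Raises ValueError if a row is missing a field or uses a value
    outside the inferred category sets.
    """
    rows = json.loads(raw)
    for row in rows:
        missing = ({"id"} | set(ALLOWED)) - set(row)
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
    return rows

# Hypothetical one-row batch in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Validating each batch before storing it catches the common failure modes of LLM coding runs (truncated JSON, hallucinated category labels, dropped fields) at ingestion time rather than at analysis time.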