Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (previews truncated):

- rdc_n0jnds3: "Like we did with ozone layer hole. But that was before AI started making us que…"
- ytc_UgyN4Ep6k…: "if you want AI to replace something, have it replace senior leadership. all we …"
- ytr_UgxM_62kA…: "Hi not wanting to sound rude but I simply can't stand ai music. The 2 weeks I pu…"
- ytc_UgzDmd-uw…: "How i look at ai is it being an extension of humanity should humanity die out. A…"
- ytc_UgyGb2fBV…: "You cannot convince ChatGPT it's conscious because it doesn't really think (only…"
- ytc_UgyQNzTyp…: "well looking at what current Ai creators are, i have no high hopes for general i…"
- ytc_Ugw4KPh2w…: "I am glad to know this many people at least have curtesy toward ai... (as oppose…"
- ytc_UgwYAE2g-…: "Given a gun to a robot is like giving a gun to a monkey, they will start with yo…"
Comment

> AI can’t reason it has zero reasoning capability nor does it ‘care’ - this conversation is either moronic or deliberately designed to mislead,

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2026-01-29T06:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgziQvlqc2yTA8IAROR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwK65UEaebBtX6HK1N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxYktEcmkAKWoNgkJN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwsYKxS4n7TO1ExGxJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzslie877F3k5anq6x4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyl3EuC1ndYutyvC8B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugylc20EAl8lJ_u2MPx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoYLR8i58a34cE8yR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwA8nhz79b_AV9aiHh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxcqLMdPfIAIh0KG2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
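The raw response is a JSON array with one object per coded comment, keyed by `id` plus one key per coding dimension. A minimal sketch of how such a batch response can be indexed for the lookup-by-ID workflow shown above (the helper name `index_by_id` is illustrative, not part of any real pipeline; the two sample records are copied from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
RAW_RESPONSE = """[
  {"id": "ytc_Ugylc20EAl8lJ_u2MPx4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwoYLR8i58a34cE8yR4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
record = codes["ytc_Ugylc20EAl8lJ_u2MPx4AaABAg"]
print(record["policy"])  # → ban
```

Keying the parsed array by `id` turns the O(n) scan of the raw array into an O(1) dictionary lookup, which is what a "look up by comment ID" view needs.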