Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "19:07 little caveat to your caveat; the monitored self driving is because of reg…" (ytc_Ugy74U-v5…)
- "People don't make the connection that AI is a threat because usually the person …" (ytc_UgxwGFyBR…)
- "Most individuals are capable of acting within personal responsibility in an ethi…" (rdc_gtfzzt1)
- "And to think. At one time i-robot and the matrix. Were just entertainment. It wa…" (ytc_Ugy_F-DCR…)
- "Why does ai not want us to know how powerful it is? Is it a self preservation th…" (ytc_Ugx1GDfDX…)
- "How does AI expect us to pay $30 a month if we are jobless or extinct?…" (ytc_UgzzKdH9T…)
- "Unfortunately, it's impossible to program an advanced AI without feelings. They …" (ytr_UgjU66U6S…)
- "No fucking shit. Ive yet to see any substantiative innovation from generative AI…" (rdc_n9h83j4)
Comment
What of the risks to human knowledge of factual information given that AI is seen to generate factually inaccurate responses to questions some 60% of the time? Is AI yet capable of actual reasoning using rules of science and logic, or only a word string generator creating grammatically correct statements without regard to factual accuracy?
Source: youtube | AI Governance | 2025-06-20T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz03PBoKb7o5e-DQc94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKLpdVEwsfCy9_ub94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy5056FSX-HpHfjUzZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvYu-WWi6NHALNZj54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzi1YhUl4GaXWyxPvp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytYiZrq8QuOoCUzaR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgztnGrDzOrH6imQ9nJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxlUrWkXMcAClDQmTR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzAguT1KzfC57ez8Lh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
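The raw response above is a JSON array of coding records, one per comment, with the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed, validated, and indexed for the "look up by comment ID" view is below. The dimension vocabularies are inferred only from the values visible on this page; the full codebook likely contains more categories, and `parse_codings` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from this page's sample output
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "government", "company", "distributed",
                       "ai_itself", "user", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records) into a
    dict keyed by comment ID, rejecting out-of-vocabulary values."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg"]["policy"])  # prints "liability"
```

Indexing by ID this way is what makes the lookup view cheap: each record in a batch response resolves to its source comment in constant time, and malformed or off-vocabulary codings fail loudly instead of silently entering the coded dataset.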