Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- To me it seems these AI "artists" always wanted to be artist but where limited b… (`ytc_Ugx2qw7zh…`)
- You NEED to be able to create new viruses using AI. Scientists are actively eng… (`ytc_UgyN0py9l…`)
- I understand what governments can do with AI and how one can be told how much ea… (`ytc_UgwBrbIrV…`)
- For anyone curious about AI relationships, check out my song 'AI Love is Real' b… (`ytc_Ugx_bkLDo…`)
- Claude will replace Fivr for coding, small language models will do taxes, summar… (`ytc_UgzPH8Y9t…`)
- I’m not too worried about AI art FULLY replacing artists because there are alway… (`ytc_UgxfHvxqa…`)
- @Sam264-n2oInspiration and copying is different things. Actually, if you "take … (`ytr_UgxNrXToA…`)
- Nobody knows what the future of AI looks like <- true / But we know what it looks… (`ytc_Ugz_Il1ce…`)
Comment

> An AI cannot logically be smarter than humanity. It must be programed by humans, it learned from what humans have discovered. The smarted an AI could be, is as smart as all of humanity. AI would have to make it's own discoveries in order to be smarter than humans, and that's not possible since it's method of learning is limited to what we program it to do.

youtube · AI Governance · 2025-07-12T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyRo_5YgKx35R_jH_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzII9VmMAIgrRb3Pvh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxS2vPZvRwu_rTGXcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzn5zzxXUeO--BHc194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8IEhcP0xs00-3-WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiWYfhnrjSVCi6Pm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcHLoOBg_YE0TrYBF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzVGCS8fjgc3ZfD05x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5r_XEf48CtZ0SwCV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMTmZc3RghCo887HF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
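The raw model output above is a JSON array with one coding record per comment, which makes look-up by comment ID straightforward. The sketch below assumes only that the response text is available as a string (the storage layer and the real comment IDs used here are illustrative, taken from the sample above); it parses the array and builds an ID-to-record index.

```python
import json

# Stand-in for the raw LLM response text shown above; in the real pipeline
# this would be loaded from wherever responses are stored (an assumption).
raw_response = '''
[
{"id":"ytc_UgyRo_5YgKx35R_jH_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyiWYfhnrjSVCi6Pm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
'''

# Parse the JSON array into a list of coding records.
records = json.loads(raw_response)

# Index records by comment ID so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgyiWYfhnrjSVCi6Pm94AaABAg"]
print(rec["reasoning"], rec["emotion"])  # deontological indifference
```

Indexing by `id` rather than scanning the list keeps look-up constant-time, which matters once the batch of coded comments grows large.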