Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Good video. People don't use AI as a tool, they use it as a instant gratificatio…
ytc_UgyFsN5dg…
Seems your issue is less about AI and more about, you know, plagiarism, lying, e…
ytc_UgxNED3TO…
Ive turned my sisters recommended into exclusively anti AI propaganda so she cho…
ytc_UgzytnNVP…
isn't language just a small part of our reasoning and understanding using our br…
ytc_UgwqEegfO…
I wouldn't mind if Ai robots replaced mechanics. Or repairmen in general. Tired…
ytc_Ugz2jwHHq…
But think about a self evolving program, could have little to no intelligence in…
ytr_UghOBOAmh…
I actually disagree, we might be missing a bigger point. Even if the work is pro…
ytc_Ugwxdogz3…
I think you could’ve talked more about the fact that AI *is* giving out bad medi…
ytc_UgzxakBYw…
Comment
We are most likely to find a way to make our brains, synapses and neurons alike, to make connections just like the so called artificial Intelligence than it killing us. To believe AI will kill us simply because we are stupid is a cry for help for the individual or group who has lost faith in humanity. They can't kill us because they simply tell us what we have told them therefore you can tell it to favor certain answers or people over others.
Please believe it when I say they can't kill us but they can shape our world because we can ourselves shape our world.
The real issue if there is any is the disproportionate inequality of riches. We need to limit riches to $500000 for any one individual. That is my solution.
A word to the wise! ("A bon entendeur, salut!")
youtube
AI Governance
2025-10-26T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxW08Xq39gQqV1wvZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylZyxHFUoy5iUQpLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyYkY30dO3UpgumFHt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyUMoZkL1QWBYiwIqN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxTmW4b-qqsD80I-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwtfZW4atAQDEoywDt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxh1LyuB_XoXjjuT0t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzW334WqoFQKyXKvUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugznva-rv-5KzM4IoNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyT2y5NdFvHvDNLHBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
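The ID lookup described above can be sketched in Python: parse the raw LLM response as a JSON array and index the rows by comment ID, so any comment's coded dimensions (responsibility, reasoning, policy, emotion) can be retrieved directly. This is a minimal sketch, assuming the response is valid JSON with the field names shown; the function names are illustrative, not the tool's actual API.

```python
import json

# A fragment of a raw LLM response, as shown above.
raw_response = """
[
  {"id": "ytc_UgxW08Xq39gQqV1wvZV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyUMoZkL1QWBYiwIqN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Map comment ID -> coding row from a raw LLM response."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgyUMoZkL1QWBYiwIqN4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Keeping the lookup keyed on the comment ID means the same index works whether the batch response contains ten codings or thousands.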