Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `rdc_n5i1rap` — "Law is going to be a dead profession soon regardless- and not because of AI. Bei…"
- `ytc_Ugzk8TdhX…` — "Any time AI tells me it can't answer something because it goes against its progr…"
- `ytc_Ugwmb1W0X…` — "I really like AI, but the biggest problem of the pro ai boys is that they think …"
- `ytc_UgwpQpR7j…` — "Ai art is the equivalent pay to win at something that takes skill and effort. It…"
- `ytc_Ugyv_sZjk…` — "The AI conversation needs to explore the distinction between Morality and Ethics…"
- `ytc_UgxzWd8LK…` — "The reason I hate AI so much is because it steals from real, hard work put in fr…"
- `ytc_UgwomL9M3…` — "I volunteer to help with The deep learning. It's a huge problem / And the people…"
- `ytc_Ugzv_SwEn…` — "What many AI people (including this guy) don't get is that humans are 3D beings.…"
Comment
AI itself is not the problem. We are.
AI is neither “good” nor “bad.”
It’s a tool – just like a knife.
With a knife you can cut bread… or harm someone. The difference lies in who holds it, and with what intention.
🔹 AI itself = neutral
Algorithms, data, probabilities. No values, no morals, no agenda.
🔹 Humans = the real drivers
– Bias: If the data is biased, the AI will replicate those biases.
– Goals: If we train AI to maximize profit, it may make ethically questionable decisions.
– Rules: Without ethical standards and regulation, chaos follows.
The real issue isn’t artificial intelligence – it’s human intelligence behind it.
AI is a mirror. It reflects what we feed it: our values, our flaws, our vision.
The responsibility lies with us – developers, companies, policymakers, and society as a whole.
So the real question is:
Will AI amplify our wisdom, or our weaknesses?
youtube · AI Governance · 2025-10-03T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwONjXUEi0T3kBq_qF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwMVtVad2rk1lajCot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgznlbAHr8zMFwIfkUd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeghA6eMmCgQObyv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8QrI0Lamr_0vvudt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw8j8H2g0sTJ7gNhQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAJvcd41YfQfmK46l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
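The raw response above is a JSON array of coded records, one per comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and sanity-checking such a response is below; the `VOCAB` sets are an assumption, inferred only from the label values that happen to appear in this dump, and the real codebook may define additional categories.

```python
import json

# Two records copied from the raw LLM response shown above
# (truncated to keep the example short).
raw = """
[
  {"id": "ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwONjXUEi0T3kBq_qF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Allowed values per dimension -- ASSUMPTION: inferred from the labels
# visible in this dump, not from the project's actual codebook.
VOCAB = {
    "responsibility": {"user", "ai_itself", "company", "government",
                       "developer", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate",
               "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "mixed", "resignation"},
}

def validate(records):
    """Return the ids of records whose value in every dimension
    falls inside the allowed vocabulary."""
    ok = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            ok.append(rec["id"])
    return ok

records = json.loads(raw)
print(validate(records))  # both sample records pass validation
```

A check like this is useful as a gate before ingesting model output: LLMs occasionally emit a label outside the codebook (or malformed JSON entirely), and filtering on a closed vocabulary catches those records before they reach the results table.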