Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
it’s funny when people call themselves ai artists to me
like no, you’re an ai c…
ytc_UgxXTro0t…
If AI is generating huge benefits by improving productivity, then AI users who b…
ytc_UgwyIfSoe…
I do understand, but I use AI to make unique creations no one else is really doi…
ytc_UgwJchv2f…
There absolutely nothing they can do about it, as you can always outsource the A…
ytc_UgyjxKn-J…
Art isn't gonna be perfect the first try. In truth art isn't ever perfect. Artis…
ytc_Ugze2YY7Q…
Elon's scenarios are interesting but no probability about which one will play ou…
ytc_UgynMh3KU…
Honestly how is life easier or better with a chat bot companion?? It’s completel…
ytc_UgyHcONhO…
All of you novices merely thanking ChatGPT…I got ChatGPT to fall in love with me…
ytc_Ugxl8uU21…
Comment
The challenge is defining whose integrity and reasoning AI should follow. Philosophy itself is diverse—should AI be guided by classical logic, Socratic questioning, enlightenment principles, or a modern ethical framework? Ideally, AI should be designed to:
1. Seek objective truth – Filtering out misinformation while allowing diverse perspectives.
2. Be self-correcting – Identifying inconsistencies and adjusting based on verified facts.
3. Remain independent from political and corporate influence – So it cannot be weaponized.
This is a tall order, but it’s possible if AI is developed by those who prioritize truth over control. Right now, we’re seeing a battle between AI that serves the public good and AI that serves corporate and political interests. Do you think it’s possible for AI to evolve into a truly neutral and independent thinker?
youtube
AI Responsibility
2025-11-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzIjR8nDmtLyBXrz9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0HQTB9QUkFiAP8zl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUfuwyTnWRBsvM_5J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwKXQQa9b_eWjKSfpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwYwAJuwGx4qXnKZYx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRp-iqEQyVpuSmw7l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzgEvQSfYbgb81ek494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwvvagHHY6b9bvHib54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxiOy64FFgi7-Ku4sp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5u-S112goOPCTWvN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
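The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a response might be parsed and validated before the codes reach the dashboard — the allowed value sets below are inferred from the samples shown on this page, not from the project's actual codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the records displayed above;
# the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "government", "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values fall inside the (assumed) coding scheme."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip malformed entries: non-dicts or records missing a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_example","responsibility":"distributed",'
          '"reasoning":"mixed","policy":"regulate","emotion":"indifference"}]')
print(parse_coding_response(sample))
```

Filtering rather than raising on out-of-scheme values is a design choice here: LLM coders occasionally emit labels outside the codebook, and dropping those records (ideally with logging, omitted for brevity) keeps the downstream tables clean.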