Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID, as in the minimal sketch below, or drawn from the random samples that follow it.
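As a rough illustration of the lookup, here is a Python sketch. It assumes the raw batches are stored one JSON array per line in a file named raw_llm_responses.jsonl; both the file name and the layout are assumptions, not the tool's actual storage:

```python
import json

RESPONSES_PATH = "raw_llm_responses.jsonl"  # assumed file name and layout

def build_index(path: str) -> dict[str, dict]:
    """Map each comment ID to its coded record from the raw model output."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line is assumed to hold one batch: a JSON array of
            # records like {"id": ..., "responsibility": ..., ...}.
            for record in json.loads(line):
                index[record["id"]] = record
    return index

# Usage: retrieve the exact coding the model produced for one comment.
index = build_index(RESPONSES_PATH)
print(index.get("ytc_Ugynmyx1mWdEU0f4CqB4AaABAg"))
```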
Random samples:
- ytc_UgxMsRrCa…: "Love your Art. AI can go to hell. The rage I feel when I see AI created photos o…"
- rdc_n0ig2vv: "I thought the findings were that it did make it some amount more accurate as it …"
- ytc_UgyI0Cmp8…: "That's what happen when a moving vehicle and there's no eyes on the road it's st…"
- ytc_UgyMECWy5…: "There are medical providers using AI to call and verify insurance information. I…"
- ytc_Ugzg7LOt7…: "How is having a robot write for you going to make you a better writer?? Disabled…"
- ytc_Ugx33bPl7…: "What gets me is that on the surface, the AI image is pretty, but when you really…"
- ytc_UgwadTIqx…: "1 year later he releases an uncensored AI. He was just using this line to critic…"
- ytc_UgyLmkll7…: "It is not intelligence, it is data we entered; the machine is faster, being a mach…" (translated from Italian)
Comment
True AI integrity requires two things:
1. A commitment to neutrality – AI should be designed to seek truth rather than reflect ideological leanings.
2. Transparency in AI decision-making – Users should know how AI makes decisions and be able to challenge or verify them.
The real challenge is that companies and governments may not always want AI to be neutral because they see it as a tool to shape narratives. Until AI can operate independently with built-in logic to detect truth without human interference, it will always be at risk of manipulation.
This is why people like Elon Musk, who advocate for AI transparency, are pushing for open-source AI models where biases can be identified and corrected. Do you think AI could ever be trained to recognize political bias and correct itself, or would it always require human oversight?
youtube · AI Responsibility · 2025-11-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugynmyx1mWdEU0f4CqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyEUlcocUq_4jcycbF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7dQYbcJB7ya075Uh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyJhnqGFElZ06stIx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxp0oJhpJfZFleZ0MN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHSTblviRaPNHWy-14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxpSD0CvFW0zsCaGJV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGMvDU00_X8Tfk2794AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPZHD9k_aw3XPlVE54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyI0sfLyCcInhzFu2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
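The Coding Result table above is simply the first record of this batch rendered as a table. Below is a minimal Python sketch of parsing and validating such a batch, assuming the response text has been saved to a file named raw_response.json (an assumed name) and using only the value sets observed in this sample; the full codebook may define more:

```python
import json

# Value sets observed in this sample batch; the full codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw model response and return records keyed by comment ID."""
    records: dict[str, dict] = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

# Usage: the first record in the batch above is the one shown in the
# Coding Result table (responsibility=company, reasoning=deontological, ...).
with open("raw_response.json", encoding="utf-8") as f:
    batch = parse_batch(f.read())
print(batch["ytc_Ugynmyx1mWdEU0f4CqB4AaABAg"])
```

A record with a value outside these sets fails loudly here, which is usually preferable to silently treating a malformed model output as a valid code.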