Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
This video seems to be talking about AI of 2023 and failed to mention that AI is…
ytc_Ugx-nT6CO…
Wow, 120 AI tools! Makes me wish I had Pneumatic Workflow back in my last projec…
ytc_UgwtinR8k…
@ClayV314 Christ dude he's a chatbot, he has no drone army and he literally does…
ytr_UgxFtNOP2…
If you watch the video you can see another white car swerve to the left that was…
ytc_UgzzuplgY…
AI art will never replace Artists because if you think about it, AI had to copy …
ytc_UgxdxWt8X…
If she really had a problem with what the company was doing, she would have left…
ytc_Ugy5kS4ia…
The actual reason is because AI (only recognisable in non-agentic models) is tra…
ytc_Ugy9AboFU…
I think the main thing to remember about current popular LLMs is that they are d…
ytc_UgysKtHak…
Comment
@jordangoodman4769 I dont suggest that it is a lost cause. I was thinking of a different angle. My angle was that AI cant perform value judgements, only logical judgements. Whatever the moral environment might be, the AI program will be expected to mind it to be viewed as "unbiased." A logically unbiased AI may very well be viewed to have a moral bias, if the results differ from the beliefs of whoever is judging.
For instance, suppose an AI were to learn that some particular political issue was factually non-existent. All those championing that particular cause would certainly judge it as biased and seek modifications. Suppose further that the AI was correct. The AI would then be biased towards factual analysis, which is still a bias! People may or may not value such bias. I dont see how you can say there would be no bias, tho.
Think of the difference between equal opportunity vs. equal outcome. If you were to choose one as unbiased, those who value the other will disagree vehemently.
youtube · AI Harm Incident · 2019-12-14T12:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
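The coding result above can be represented as a typed record. This is a minimal sketch, not the project's actual code: the class name and structure are assumptions, while the field names and example values are taken directly from the table.

```python
from dataclasses import dataclass

# Hypothetical record type mirroring the Coding Result table above.
@dataclass
class CodingResult:
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "mixed"
    coded_at: str        # ISO 8601 timestamp string

# The example values shown in the table:
result = CodingResult(
    responsibility="ai_itself",
    reasoning="deontological",
    policy="none",
    emotion="mixed",
    coded_at="2026-04-27T06:26:44.938723",
)
print(result.responsibility)  # ai_itself
```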
Raw LLM Response
[
{"id":"ytr_Ugx5kkOCD30h7jqlgBN4AaABAg.92dPPRY-ZED92f0h9twDv9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugx5kkOCD30h7jqlgBN4AaABAg.92dPPRY-ZED92gcbrImR7L","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyolKZJVkQldyGTCjh4AaABAg.92W2ZbOlzet92Y4q754q5T","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxUApOF7e4-_2k2ULF4AaABAg.92Vs9WY_tkq92f-G7NgOEj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyPOlPNwM-2GiW8ZTl4AaABAg.92VhVL_aDEy92_f6jkiiJP","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyPOlPNwM-2GiW8ZTl4AaABAg.92VhVL_aDEy92fcrDocEJU","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyPOlPNwM-2GiW8ZTl4AaABAg.92VhVL_aDEy994FsfQemTw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyoQAW2FGOd4ZnnPOF4AaABAg.92VgEqZhfAf92WiwkpQQM_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyoQAW2FGOd4ZnnPOF4AaABAg.92VgEqZhfAf92Wt46RwLOP","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwVaWZFJdXYJbP0sBZ4AaABAg.92VYQYx5mz592WQcPa1WWx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
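The raw response above is a JSON array of coded records, one per comment ID. A minimal sketch of parsing and sanity-checking such a payload, assuming the expected keys seen in the records (the short IDs here are illustrative placeholders, not real comment IDs):

```python
import json

# Keys observed in each record of the raw LLM response above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Illustrative payload in the same shape as the raw response.
raw = '''[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "outrage"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none",
   "emotion": "indifference"}
]'''

records = json.loads(raw)
for rec in records:
    # Reject records missing any expected dimension.
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')} missing keys: {missing}")

print(len(records))  # 2
```

A check like this catches truncated or malformed model output before the codes are written back to the dataset.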