Raw LLM Responses
Inspect the exact model output for any coded comment. You can look a comment up directly by its comment ID, or pick one of the random samples listed below.
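As a rough illustration of the lookup step, here is a minimal Python sketch. It assumes the coded comments are stored one JSON object per line in a hypothetical `coded_comments.jsonl` file; the file name and field names are assumptions for illustration, not the tool's actual schema.

```python
import json


def lookup_coded_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the stored coding record for a comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Each record is assumed to carry the comment ID under "id",
            # alongside the coded dimensions and the raw model output.
            if record.get("id") == comment_id:
                return record
    return None


# Example: the comment shown further down this page (full ID taken from
# the raw response at the bottom of the page).
# lookup_coded_comment("ytc_UgyPBXax3hfTyFkR71F4AaABAg")
```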
Random samples:

- "This is why I laugh when people call it AI. 1. It’s not AI 2. It’s not AI 3. The…" (ytc_UgwI6K5EB…)
- "Been using AICarma for tracking brand mentions; its insights on AI hallucination…" (ytc_UgwTY_ojO…)
- "The only part of this story I didn't already know about before I started this vi…" (ytc_UgxIDvTw_…)
- "@ronwagoner8358 Right. Capitalism - the thing that has increased the ability for…" (ytr_Ugz84VVGP…)
- "You're my hero I have reached a point where I hate AI and it's so irritating see…" (ytc_UgywMmnzk…)
- "Any car provides independence. Move to Europe if you want excellent PT and a lo…" (ytc_UgwmYsWbA…)
- "I totally agree with you you don't hear about any of this on the news nobody eve…" (ytc_UgwrY-lEn…)
- "Problem is all humanity is wicked and evil. Jesus Christ see's this coming, if …" (ytc_Ugxs9BmZ0…)
Comment
I think some of the analogies are very generalised and in my view wrong. For example, comparing the arms race (which by definition was about war and destruction) with the AI race which is about who has the better more intelligent system is wrong. If we do a comparison, then we conclude that AI is about war and destruction, which obviously it is not. AI should be stopping us of doing bad things, making bad decisions or prediction bad things. Historically the largest number of human deaths were caused by religion (man-made), wars (man-made) and disease (man-spread). So what if AI stops all the bad things? Also about this gorilla problem, this is not a proper comparison or analogy. The gorillas will never understand the difference in intelligence between them and humans. With humans, you can reason, communicate, understand. It is not about us being more intelligent that gorillas.
Platform: youtube
Topic: AI Governance
Timestamp: 2025-12-06T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
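The coding result can be thought of as one typed record per comment, mapping directly onto the table above. A minimal sketch follows; the class name, field names, and the example ID link are assumptions for illustration, not the tool's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One comment's coded dimensions, mirroring the table above."""
    comment_id: str
    responsibility: str  # e.g. "none"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "industry_self"
    emotion: str         # e.g. "approval"
    coded_at: datetime


# The row shown above, as a record. The full ID is taken from the matching
# entry in the raw response below.
result = CodingResult(
    comment_id="ytc_UgyPBXax3hfTyFkR71F4AaABAg",
    responsibility="none",
    reasoning="deontological",
    policy="industry_self",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```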
Raw LLM Response
[
{"id":"ytc_UgywCcOS-qa6xCJxT3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLMHUK_Ydr4CVKFtt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFzcjQTwFi8wKHtC94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoLDNz5PX5rHFvRVh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzdDByjokAHZCRsyc54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzicl3veLvMyjzKRt54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEZ12voOS-Cu-tLBZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPBXax3hfTyFkR71F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzyiuH83f1r97718Ax4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzJaXZRz4jqI9OA3bB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
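The raw response is a JSON array with one object per comment in the coded batch. Below is a minimal Python sketch of how such a batch could be parsed, sanity-checked, and indexed by comment ID; the allowed label sets are inferred only from the values visible on this page, not from an official codebook.

```python
import json

# Label sets inferred from the values appearing on this page (assumption, not a codebook).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        comment_id = entry["id"]
        for dimension, allowed_values in ALLOWED.items():
            value = entry.get(dimension)
            if value not in allowed_values:
                # Flag out-of-vocabulary labels rather than silently storing them.
                raise ValueError(f"{comment_id}: unexpected {dimension}={value!r}")
        coded[comment_id] = {dim: entry[dim] for dim in ALLOWED}
    return coded


# coded = parse_batch(raw_response_text)
# coded["ytc_UgyPBXax3hfTyFkR71F4AaABAg"]
# -> {"responsibility": "none", "reasoning": "deontological",
#     "policy": "industry_self", "emotion": "approval"}
```

Indexing by ID is what makes the per-comment view possible: the entry for `ytc_UgyPBXax3hfTyFkR71F4AaABAg` in this batch matches the coding result shown in the table above.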