Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Why’s my search engine’s algorithm 10x better than YouTube’s? YouTube’s “machine…
ytc_UgwjgEPxG…
Why are we not focusing on ethical and other advancements of AI to grow in inclu…
ytc_UgxodTFtN…
I don't understand all the coments about AI being dangerous. I understand that i…
ytc_Ugzw79YUX…
In Robot mind there is ne sensitivity for that robot have no recoil control.
Fi…
ytc_Ugz-57I_x…
The “self driving” tech reminds me of a famous Jeremy Clarkson quote:
“You make…
ytc_UgxK8NUUq…
It would be better if AI was regulated and anything AI generated has to come wit…
ytc_UgwnY1WuN…
Facial recognition only picks up if you have a mug shot. These Democrats are awa…
ytc_UgwGgMnNa…
There will just be jobs of people working on creating the ai until the ai create…
ytc_Ugxa8jw0B…
Comment
So much talk and criticism for AI doing what 1 percenter humans already do. Yet these same people also defend what the 1 percenter humans are doing. Why not take the anti-AI rhetoric and convert it to anti-greed rhetoric instead? Why do so many value greed/money/capitalism/jobs over humans in the first place? Conservatism = love of money. Progress = love of humans. Stop getting distracted!
youtube
AI Governance
2024-07-01T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgybClLuoAI4Dj241dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxR--ghM1hBvNBzSzJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx5GuCyK6rZcrRIXUJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeEXZ7HF1megIaGFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWuHc7ddca96AxB-V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzX8AofQpnBvIeeVPN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx5sPaafb2vTVe_5dF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzXm4X7ZzIxYFAK8OR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugym_-2buvGuCQqDHc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx3_3OaZfrT9uRNydB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
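A downstream consumer of this panel would typically parse and sanity-check the raw response before accepting the codes into the dataset. A minimal sketch, assuming the category values visible in the sample above are the full codebook (the real codebook may define more values, and `validate_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred only from the sample response
# shown above; treat these sets as an assumption, not the real codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row needs a comment ID in the ytc_ namespace.
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Every coded dimension must take a known value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1
```

Rows that fail validation can then be queued for re-coding rather than silently dropped or accepted.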