## Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or pick one of the random samples below.
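If the raw responses are exported, the same lookup can be reproduced offline. A minimal sketch in Python, assuming a hypothetical `raw_responses.jsonl` export with one coded batch per line (each batch being a JSON array like the one shown at the bottom of this page):

```python
import json

def load_index(path: str) -> dict[str, dict]:
    """Index every coded comment by its ID across all batches."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            batch = json.loads(line)  # one line = one JSON array of codings
            for item in batch:
                index[item["id"]] = item
    return index

# Usage: fetch the coding for one of the IDs visible in the batch below.
index = load_index("raw_responses.jsonl")
print(index["ytc_Ugz6OjOLJWYQj57-4zF4AaABAg"])
```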
### Random samples (click to inspect)
- AI isn’t bad, it’s the people feeding it that can use it for their own agenda, t… (ytc_UgyiqauqH…)
- I really think of a company can reduce its workforce with ai, they should have t… (ytc_Ugwt5pY-L…)
- Atleast my government, the Modi one I mean isn't trying any of that since India … (ytc_Ugw4v4-VU…)
- Im not an artist and I have no knowledge about it, but as a consumer, I agree. T… (ytc_UgzmfEnU5…)
- No... it operates on code, The makers are bad coders. AI is fine ppl are just du… (ytr_UgxVHYk1w…)
- Street light goes out. I call AI at some gov't bureau: It's gonna open a case fi… (ytc_UgyTqkm18…)
- you know what, we should pass a law that requires AI to be trained exclusively o… (ytc_UgwZmymst…)
- "I don't know if that makes people comfortable or uncomfortable". You won't have… (ytc_UgxNVYKIg…)
### Comment
> I doubt AI could ever have emotions like desire, anger, fear. Thus I doubt AI would ever 'want' to end humanity. It's capable perhaps of rationally deciding to do so.
>
> Wow, what a pompous statement, so glibly declaring that Elon Musk has no moral vision. He's repeatedly laughingly mocked conservative figures. He admitted he consumes the BBC, Guardian, & the NYT, & clearly he believes those media outlets are reliable.
>
> He seems too glibly political, very far left. He's mentioned several times 'profit' in a disdainful way. Shouldn't a scientist be apolitical, not so clearly biased?
Source: youtube · Topic: AI Governance · 2025-09-10T09:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
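The four dimensions mirror the fields of the raw JSON below. As a sketch, the coding schema could be modeled as follows; the value sets include only the labels observed in this sample, so the actual codebook may define more:

```python
from dataclasses import dataclass

# Label sets observed in this sample; the real codebook may be larger.
RESPONSIBILITY = {"company", "developer", "user", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "unclear"}
EMOTION = {"approval", "fear", "outrage", "resignation", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Reject any label outside the known value sets."""
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```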
### Raw LLM Response
```json
[
{"id":"ytc_UgwEIZlzL0A-G4XVTzJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy6uTkNC3tF0QMvhg54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxeeJV8m6iAqlAd3Vl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxtyYtGLS1IyJxxGqF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgydmFg1susEVqLSxo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOQ9O1rYu1hZpEekB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz6OjOLJWYQj57-4zF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQJb9aRTsUYjYDaX54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwfr7x9Izo75P6ntIF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxIguoouo8_Ysf8WCF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
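The per-comment result shown above is presumably obtained by parsing this array and selecting the row whose id matches the inspected comment. A minimal parsing sketch, reusing the hypothetical CodingResult class from the previous block:

```python
import json

def parse_batch(raw: str, comment_id: str) -> CodingResult:
    """Extract and validate the coding for one comment from a raw batch response."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            result = CodingResult(**row)
            result.validate()
            return result
    raise KeyError(f"{comment_id} was not coded in this batch")
```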