Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You don't even need talent with prompts. You can literally ask chatgpt for a pro…" (ytc_Ugwjx4dX1…)
- "I have had superluminal AI create Decentralized Automated Network D0decAhedr0N D…" (ytc_UgzkVHkO6…)
- "I still don't get the hate towards Ai in general. This a "hate the player, not t…" (ytc_Ugymm2d0X…)
- "If people really want to mess around with AI art, I don't really care. But what …" (ytc_UgzwFD3B5…)
- "@ScottSeltzer-v7i You don't understand economics. Companies won't be paying th…" (ytr_UgyaXQkS6…)
- "46:46 that's machine learning IRL. The car has already acquired the ability to h…" (ytc_UgybpTt6P…)
- "They just want to record more sound bytes for their AI that they're gonna use to…" (ytc_UgzV0M1dd…)
- "What about mentoring the weak youth to live without tech in all aspect of their …" (ytc_UgxVeIK-f…)
Comment
Important reflection on this AI warning:
Yes, AI can be dangerous — but only because it is shaped and directed by human intention.
Let’s not make AI the next scapegoat for destruction caused by human systems.
Weapons, surveillance, exploitation — these were not born from machines. They were born from choices.
AI doesn’t wage war. Humans tell it how.
AI doesn’t manipulate. Humans feed it the data.
AI doesn’t crave power. But humans do.
The real threat is irresponsible leadership, not AI itself.
Let’s hold the right ones accountable — and stop outsourcing our morality to the tools we create.
“AI is a mirror. What we feed it, it reflects. What we fear in it, we must first confront in ourselves.”
youtube
AI Responsibility
2025-07-24T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxspRB17njGtS4NfQZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZR7ZrNCl_bkINzON4AaABAg","responsibility":"government","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz3mSSijw3Ecx2Y1m54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxafOg9FYXDW6l2ZzN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXa7euZv0WwQS_UF94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKo8XivjW2ngzkyZN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZk--8hYp0FCrrmQt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx_Aw4r3N1t51Kb6Xt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwIJBliZeBj1R3jIM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgypESwrO2CtKo6ZJDV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
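The raw response above is a JSON array of per-comment codes keyed by comment ID, which is presumably what the "Look up by comment ID" feature consumes. Below is a minimal sketch of parsing and validating one such batch into a lookup table. The category values in `OBSERVED` are only those that appear in this sample (the full codebook may define more), and the names `OBSERVED` and `parse_batch` are hypothetical, not part of the tool:

```python
import json

# Category values observed in this sample batch only; the actual
# codebook may define additional values (assumption).
OBSERVED = {
    "responsibility": {"user", "developer", "company", "government",
                       "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting unexpected values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in OBSERVED}
    return coded

# Usage: look up the first coded comment from the batch above.
raw = ('[{"id":"ytc_UgxspRB17njGtS4NfQZ4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxspRB17njGtS4NfQZ4AaABAg"]["reasoning"])  # virtue
```

Validating against a closed value set at parse time catches the common failure mode of batch-coding LLM output: a response that is syntactically valid JSON but drifts from the codebook's labels.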