Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Universal Basic Income (UBI): Governments might implement UBI to ensure everyone…" (ytc_UgxIl-PQu…)
- "I not is not true that u are speaking through real speak but create by AI speak …" (ytc_UgzH7UM1l…)
- "The Techno fascists have two goals: 1) Improve the dexterity of their robots to …" (ytc_UgwV47l4X…)
- "Guys this is great content, informative, but 100% speculative. There are extreme…" (ytc_UgyESV92A…)
- "OMG ! SHe looks , talks and thinks so real ! ..ITS Truly A.I. invasion now. .…" (ytc_UgxcEQCR1…)
- "I think WhatsApp may have used AI for sending plus posting photos and videos on …" (ytc_UgxtV7Pl5…)
- "Roger Penrose is right. AI is not intelligent. But not because it doesn't have c…" (ytc_UgwTJwsMQ…)
- "AI will get 200% more accomplished; I work with black women and they sit around …" (ytc_Ugwk38X6A…)
Comment
Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration when the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.
youtube · AI Responsibility · 2023-11-20T12:3… · ♥ 869
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwMOw_UzqM_voH5fkl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxqCu_KbvTezL82r6p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxE67T-0EDajeerP7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzs-OfettBRbAxwBh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZC2VOWCMwxS6YHEd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxeIADcpnlz9Imfc4l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgydxDL530RNzUVcc-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxoKCUkhWUIzkRiW4N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiyVFHd4bPCwVesKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhKrx4P6GcS6sc_bp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
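The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and a single comment's codes looked up by ID (the field names and the example ID are taken from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (one entry copied from the response above).
raw_response = """[
  {"id": "ytc_UgxeIADcpnlz9Imfc4l4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]"""

# Parse the array and index the coded comments by their comment ID.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coded dimensions by ID.
code = codes_by_id["ytc_UgxeIADcpnlz9Imfc4l4AaABAg"]
print(code["responsibility"], code["emotion"])  # → user indifference
```

Indexing by `id` makes the lookup O(1), which matters if the same dictionary backs a "look up by comment ID" search over many batches of responses.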