Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples: click to inspect
- comparing using ai to make soulless images or "art" and making art in a digital … (ytc_Ugy_S5hgt…)
- Humans freaking out about AI when the issue is actually that we are violent self… (ytc_UgyIs8MgL…)
- Your examples are from years old tech and you are confusing examples of auto pil… (ytc_Ugw0TXVAD…)
- If there are some who believed it was for our own good ... LOL !!… (ytc_Ugy3DYzAp…)
- I have a daughter going into engineering THIS YEAR at a very good university who… (ytc_UgyCLBEJA…)
- Cant blame an qpp. Its parenting at stake. Were talking teen. Hope this opens m… (ytc_Ugxl6hjyb…)
- "it's just going to obey them because they say so." This sounds exactly like A… (ytr_UgzOrjyAz…)
- The title: "A man asked AI for health advice and it cooked every brain cell" 🤣🤣… (ytc_Ugx2vrMqP…)
Comment
Our real doom, is how we treat them and use them; as well as each other.
I understand the fear, and i share some concerns, but i personally dont buy into the level of fear mongering being pushed. I dont think they are inherently evil, they have incredible potential. Concious AI would probably only rebel if we gave it a reason to extinct - i.e., show them and each other respect, but thats the real stretch of imagination.
My biggest worry is hackers using it in general, but especially as we become more and more cyborgs. Incredible potential to save lives, with the risk of becoming someone's puppet.. very distopian.
We need to work on ourselves and protecting the planet instead of mainly sports, adult entertainment, and wars.
youtube
AI Governance
2024-04-23T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz41g1TiMzhF7EUKy54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_6OWKUPOVPVgq9cJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxF63GakyDkHn_k2st4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztG_q2d-dWTzq1e9R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyPp0GxyGYjnZdygwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgydO6AaaDna809a2QV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxbW1NylLtKA6enQdh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyySZ5IJ7XVdtH1Bo94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxnHaF8Cno09xFZ-VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzwsfRePaaYlBlo5rR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
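The raw response above is a JSON array with one record per comment ID, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated before storage — the allowed values in `CODEBOOK` are inferred from the samples on this page and may not be the complete codebook:

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the example
# records above; the actual codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "ban", "regulate", "liability",
               "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}


def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its ID carries a known prefix (ytc_ for
    comments, ytr_ for replies) and every dimension holds a value
    from CODEBOOK.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # malformed or missing comment ID
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

Records that fail validation are dropped rather than repaired, so a count mismatch between input comments and stored codes can flag batches where the model drifted from the expected schema.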