Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "[Turtlehead](https://www.prairiemoon.com/chelone-glabra-turtlehead-prairie-moon-…" (`rdc_eh5k7ug`)
- "If you get call from Anthropic recruiter, think twice. You won't be kept for mor…" (`ytc_UgzbpQL9H…`)
- "I would ask for a fair share out of quantum related shares which to me is 1.…" (`ytr_UgwenKE2X…`)
- "At present I am running with the assumption that this is *NOT* a hacked server o…" (`rdc_d3tvm39`)
- "Definitely the first one. the second one also but it was AI back in 2022…" (`ytc_UgyMOLaPj…`)
- "5:29 well. Elon Musk is a complete moron, so I'm not too worried about him being…" (`ytc_UgwFFskSS…`)
- "he is an old dreamer but half of what he say is a wish, his wish how AI should b…" (`ytr_Ugyc-dVQM…`)
- "No, the „AI” does not learn. It is, simply put, a mindless patter fitting algori…" (`ytr_Ugy79X5xI…`)
Comment
If anything, giving robots free will would be the most catastrophic thing you can do. They'd become just like humans, but far more intellect and (probably in some cases) incapable feeling pain, so tackling one out of way if you were to fight one for survival, you'd lose without proper weapons... Although all that depends really on robot too. Free will is what makes us do stupid things out of order, instead of set and calculated process that we would follow... So don't give robots free will or research on that. That's at least some of my thoughts. We don't really need any more chaos than what we already do... (Nukes was one big mistake to learn and research about for example... So now it's just matter of time when one blows up...)
youtube · AI Governance · 2024-04-03T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzE38gYZJHSrcL1VVh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9v4p1qYDUI1choll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpREhCn8rovUo2Oop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBcw7J-dzxQxV-SqZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNRYQKZC35KQf5r2t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyJx9Gqe-bHpXiQWoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy1pqx7b8-3QdFDK8J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcgVrr2LAttvXorXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
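Raw responses like the one above are machine-readable JSON, so they can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible on this page and may be incomplete, and the function name is illustrative, not part of any tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the values visible
# on this page (an assumption -- the real codebook may contain more).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels are known."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must be an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must carry a label from the allowed set.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical example: the second record uses an unknown label and is dropped.
raw = '''[
  {"id":"ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"example_bad_record","responsibility":"society",
   "reasoning":"unclear","policy":"none","emotion":"fear"}
]'''
print(len(validate_codes(raw)))  # 1
```

A validator like this catches the common failure modes of LLM coding runs (malformed JSON, invented labels, missing IDs) before bad codes reach the database; rejected records can be re-queued for recoding.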