Raw LLM Responses
Inspect the exact model output for any coded comment.
Example comment
If the ones programing AI are angry, and they believe that the greatest problem in the world is that humans are destroying the environment, then AI will conclude that to save the planet, kill all humans.
However, if you tell AI that the greatest problem is to make the environment safe, so all life, including humans can thrive, then AI will help make food safer etc. thereby saving humanity.
Source: youtube | Topic: AI Governance | Posted: 2024-05-23T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
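The table above maps naturally onto a small record type. Below is a minimal sketch of that record as a Python dataclass; the class name, field names, and comments are illustrative, drawn only from the table and the sample response further down, not from the pipeline's actual code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str      # source comment id, e.g. "ytc_..."
    responsibility: str  # who is held responsible, e.g. "developer"
    reasoning: str       # moral-reasoning style, e.g. "consequentialist"
    policy: str          # policy demand, e.g. "none"
    emotion: str         # dominant emotion, e.g. "fear"
    coded_at: datetime   # when the model's coding was recorded
```

The row shown above would then be `CodingResult("ytc_…", "developer", "consequentialist", "none", "fear", datetime.fromisoformat("2026-04-26T23:09:12.988011"))`.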
Raw LLM Response
```json
[
{"id":"ytc_UgxsD5jfyViQLj5sWcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGdBSsfowTGGPsUup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgynXauCH7yq4tn0IcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyyzV7S70TZzfACRoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7VtzT_AtKDu5TUYJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwNNajhdXxW6nnThvh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxc7pgFpg8V46At66d4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEpRZhcfAhSOoroDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyDLw_cLbrle9MHvM14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgydiMB34WilEsB0XS94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
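Since the model returns a bare JSON array, a small validation step can catch malformed rows before they are stored. A minimal sketch, assuming the response format shown above; the function name is hypothetical, and the label sets are taken only from this one sample, so the real code book may allow additional labels.

```python
import json

# Label sets observed in the sample response above; illustrative only,
# the project's code book may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "resignation", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse one raw model response into coded rows.

    Assumes the model returned a JSON array of objects, each carrying an
    "id" plus the four coding dimensions. Rows with missing keys or labels
    outside the observed sets raise instead of passing through silently.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unknown {dim} label {row.get(dim)!r}")
    return rows
```

Rejecting unknown labels outright, rather than coercing them to a default, keeps a drifting model from quietly polluting the coded dataset.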