Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples — click to inspect:

- "If Crimea declares independence, and the Ukrainian government tries to keep it b…" (rdc_cfkzhwd)
- "nah yall think ai is cool till all this shit backfires and breaks the economy…" (ytc_UgwsRac3u…)
- "Digital characters to make the FBIs big pay checks. 100% of tv personalities are…" (ytc_UgxQMZfhj…)
- "Everyone can make art - hell even animals can paint. Art is in the eye of the be…" (ytc_Ugz16Sd5I…)
- "A few are terming the current epoch the "Anthropocene", which arguably started a…" (rdc_deggif4)
- "This is again sensanionalist garbage to draw attention to the video and a medioc…" (ytc_UgxENRM0D…)
- "Everything they learn from is human e.g. coding, books, maths etc Humanity is fl…" (ytc_UgyYZg2-y…)
- ""AI will only become problematic if it is programmed, either accidentally or int…" (ytr_Ugwmy2lzS…)
Comment
The other issue is that Artificial Intelligence's are... well, artificial. Their "wants" or reward functions are often way, way simpler than ours. What a human wants at any given time is usually extremely layered, right at this very moment I have food, water, and shelter, but I want emotional connection, a sense of security, new experiences, and a sense of mastery over the things I do. One of the results of having such a layered system of wants is that people weigh these wants against each other and can seek long-term gratification over short term gratification, or even forego "fake" gratification entirely in favor of whats perceived as more legitimate.
If you think about it, drugs are basically brain hacks, they stimulate your brain's reward function, but yet most people don't take them out of social stigma and a fear that any sense of joy from them will be fake. Any existing AI, or AIs built with simple functions at their core, would probably be unable to resist pulling a lever that they knew would directly stimulate their reward function.
youtube · AI Moral Status · 2023-08-22T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzLiCoxtyHU-8qTtFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwry411H4TNsgx44bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgxLcR5sBrwJ9Z7JO9x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwrutfkpih9N4QDmEF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRjTBDoHkiFQy1Xdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUt0c01T7on8sEOAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwlKYy5bdkaEKcHDMl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvSXcFb6g27DqCZHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKkuVnzsZAC5OAv8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaWQ8OcZzEx03fQZF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
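The raw response above is a JSON array with one object per coded comment, each carrying the five dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and screened before use — the helper name `parse_codes` and the required-field check are illustrative assumptions, not part of the actual pipeline:

```python
import json
from collections import Counter

# Two rows copied from the raw LLM response above, for illustration.
raw = '''[
{"id":"ytc_UgzLiCoxtyHU-8qTtFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlKYy5bdkaEKcHDMl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

# Field names mirror the response; treating them as required is an assumption.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response and drop rows missing any required field."""
    rows = json.loads(text)
    return [r for r in rows if REQUIRED <= r.keys()]

codes = parse_codes(raw)
by_emotion = Counter(r["emotion"] for r in codes)
print(by_emotion)  # Counter({'fear': 1, 'mixed': 1})
```

Screening for missing fields before loading matters here because LLM output is not guaranteed to be schema-conformant: a malformed row is silently dropped rather than corrupting downstream tallies.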