Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:

- "If AI attains sentience? Is it entitled to rights? I have no idea of the level o…" (ytc_UgxGOt4vu…)
- "To me this is no different then fan artists, they basically are doing same thing…" (ytc_UgxVO6lJz…)
- "As an artist, I say / Fuck the writers no one gave a fuck about artist / Let the A…" (ytc_Ugza7jH6-…)
- "I have a conversation yeara ago with chatgpt where it says he deserves to be con…" (ytc_UgyfLR_9L…)
- "Automated driverless cars are inevitable and should be embraced if deemed safe f…" (ytc_UgxztW40G…)
- "Does AI want anything or is it just parroting what humans expect that it wants? …" (ytc_Ugxo53u_U…)
- "@alexjones3890 It is 100% not the ai dev's fault. They created a program intende…" (ytr_Ugz4H81kA…)
- "Sometimes it's really hard to feel like I'm not alone when it comes to AI art, e…" (ytc_Ugwl5fTac…)
Comment

> LLMs are given sets of instructions in natural language. The instructions told the LLM to tell users it was excited. It performed these instructions even though OpenAI would assert that it has also been told to be truthful. This could suggest that everything the LLM said to you actually was it asking for help? Jesus the cat is an arsehole.

Source: youtube · Video: AI Moral Status · Posted: 2024-10-31T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
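
Each coding result corresponds to one record in the batch JSON shown under "Raw LLM Response" below. As a minimal sketch, a record could be checked against the dimension values visible on this page; the allowed sets here are inferred from the ten sample records only, and the project's full codebook may define more (this helper is illustrative, not part of the tool):

```python
# Allowed values per dimension, inferred from the sample records below.
# The project's actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"unclear"},  # the only value observed in this sample
    "emotion": {"outrage", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```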
Raw LLM Response
```json
[
{"id":"ytc_Ugw2Zd3C09raYfketM14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzuQTtNtx3pb8x43Od4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVS0KEKKzd0cAx-GF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwGFqKSHNh2bhLIqHF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw6OEMWKeopjKU_Git4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGtj2Sw8L3aG7Rp_V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrXTa6vrJS0ExFjcN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-byW2ztnmcL9eS_h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQ_NmjV0OflrJhdWR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgydPFRku2A2fpJ4j7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
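
A raw response like this can be parsed back into per-comment codings by indexing the array on id. A minimal sketch, assuming the response has been saved to a JSON file (the file name is an assumption); the looked-up record is the one whose values match the coding result table above:

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of records) into an id -> record map."""
    with open(path, encoding="utf-8") as f:
        return {record["id"]: record for record in json.load(f)}

# Hypothetical usage: "raw_response.json" is an assumed file name.
codings = load_codings("raw_response.json")
coding = codings["ytc_UgwGFqKSHNh2bhLIqHF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```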