Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up by comment ID.
Random samples
- "So just think, the Democratic party uses AI in both mass media and social media …" (ytc_UgxSE7Z7x…)
- "At first, I used chatgpt for my IQ puzzles and personality assessment!! Later, I…" (ytc_Ugx522w36…)
- "This would all be true if it wasn't based on the false premise that generative A…" (ytc_UgzRpQY9R…)
- "@gondoravalon7540 lmao so your only other option is to steal artwork from artist…" (ytr_UgzR7gQ7k…)
- "The solution could be introduction of 'AI usage tax' on people and companies usi…" (ytc_UgzokjBYP…)
- "LLMs are the AI equivalent of American cars, massive powerful V8s, but still not…" (ytc_Ugy_vS-AW…)
- "I feel like each and every comment in here in the near future will look a bit li…" (ytc_UgwEQGbWU…)
- "AI WILL NOT BE ABLE TO DRIVE ON ICE OR A BLOW OUT TIRE,,,,,,WATCH THIS WILL END …" (ytc_UgydPxRNN…)
Comment

> Of course, it's possible for an AI to have misaligned goals, but I think that motivation is just as hard as intelligence. Reinforcement learning algorithms are notorious for finding weird cheats to satisfy the training objective without really doing the task we want. If we produce the "paperclip maximizer", yeah, maybe it will dismantle our cities and destroy humanity to get the raw materials for more paperclips, but it could also just write code for a computer game to stimulate its sensors in a way that feels like "paperclip maximization".

youtube · AI Moral Status · 2023-08-21T04:1… · ♥ 94
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy1lZEkLxezeRB9E_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLsz93WSGC_vpFxfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy7DQbIPzmJUH2bHM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCRV3OqrB0KZPJUfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTVhsyMbJrZZcgXSp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxWO7pjoCcNbzlKI4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA9fHIl-j_uHD9ts14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeeFXoH6KC2cJIqTl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCP_bAQD0WVSoYy214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwACwMGXQtCD7JxydR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
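The raw response above is a JSON array with one record per comment, keyed by comment ID, with the four coding dimensions shown in the table. A minimal sketch of how such a response might be parsed into a per-ID lookup, with missing fields failing loudly rather than silently; only the four field names come from the records above, while the function name and strictness policy are assumptions for illustration.

```python
import json

# The four coding dimensions visible in the records above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError when a record lacks an id or a dimension,
    so malformed model output is caught instead of stored.
    """
    coded = {}
    for rec in json.loads(raw):
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec['id']} missing dimensions: {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["emotion"])  # -> fear
```

Storing the records as a dict keyed by ID is what makes the "look up by comment ID" view above a constant-time operation.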