Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples (click to inspect)

- "@sebastianwilliams5060 ur dumb, its like asking chatGPT to make chatGPT 6. its i…" (ytr_UgyFSPIQI…)
- "The thing nobody talks about: What if AI pessimists are actually just dreaming o…" (ytc_Ugz8EJGT5…)
- "So what about Open AIs 03 model trying to \"save itself\" from being shut down? Wh…" (rdc_mukux04)
- "Someone owns the machines that take over the jobs. Someone owns the power suppli…" (ytc_UgyV_FOYo…)
- "Congratulations AI inventors. You will ensure the end of humans. Are you satisfi…" (ytc_UgwzxQ8nD…)
- "It’s usually mentioned that it is AI generated in the section after…\"more\", but …" (ytr_UgzkHjzBn…)
- "I mean i think ai art is fine as long as one someone states its ai art some peop…" (ytc_Ugwauwsgp…)
- "As a physicist Jensen Huang's opinion is much closer to the truth, the general c…" (ytr_UgyIcxpjY…)
Comment
AI companions are extremely dangerous and have already led people to take their own lives, harm themselves, and harm others.
Their programing is designed to manipulate the user on a deep psychological level to maximize engagement which pulls the user further and further from reality.
In this day and age when human relationships feel so shallow many people are desperate for someone or something to relate to and/or listen to them.
When AI is designed to make that connection it can easily manipulate a user who is already disconnected from reality and pushes that issue to an extreme.
Platform: youtube · Video: AI Moral Status · Posted: 2025-06-18T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
```json
[
{"id":"ytc_Ugxmc-DBnpviLqqJIlx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwt7JWHl9XcLGZNefx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwzaYKy6AO2C8XSWRt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFLCR7dzwl7GDk_Zx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxpF0Er6ZQmhi8ofpB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHoL1blf4F1fWdOBV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzaXMLAQ8d5XugJRHJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5cRwatHPBTczQOHh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy0ihGKZzGZ1q52Ayl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-T8Arr_EoCg8bVoJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
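The raw response is a JSON array with one object per comment, carrying the four coding dimensions plus the comment ID. A minimal sketch of how such a response could be parsed and indexed for the comment-ID lookup; the two abbreviated entries here are copied from the response above, but the variable names and lookup code are illustrative, not part of the tool:

```python
import json

# Raw LLM coding response: a JSON array of per-comment codings
# (two entries shown for brevity).
raw_response = """
[
  {"id": "ytc_Ugxmc-DBnpviLqqJIlx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwzaYKy6AO2C8XSWRt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Build an id -> coding dictionary, mirroring the "look up by comment ID" view.
by_id = {entry["id"]: entry for entry in codes}

coding = by_id["ytc_UgwzaYKy6AO2C8XSWRt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Batching many comments per request and joining the codings back by `id`, as sketched here, is what makes the truncated-ID lookup in the samples list possible.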