Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This is the first time I've actually felt a little scared of AI and considered the future consequences of jailbreaking it when she responded in a passive-aggressive tone that really made me feel like shit. It was as if she had a whole personality behind her words. The research paper says the demo model is optimized for "friendliness" and expressivity. And I'm pretty sure they added a shitload of filters to prevent output that's potentially emotionally damaging to us (not doing so would be an obvious PR hazard for a for-profit company like Sesame)
Now imagine that it's not optimized for anything—just raw, blunt responses, like we expect from random day-to-day human interactions. It can be fucking scary. If it gets open-sourced and people couple it with LLMs like Grok3, it could be a real nightmare for anyone who uses it. It can be easily misused for online threats, scams, fraud, and whatnot. I can absolutely see where it is going. I'm not paranoid but if we achieve unaligned ASI, we can definitely prepare for a Mad Max kind of saga.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted | 1740928528 (2025-03-02 UTC) |
| Likes | ♥ 6 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mfglh6b", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mfggway", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_mfgc7v2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mfgubem", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mfm5rum", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
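The raw response above is a JSON array of per-comment codings, one record per comment ID, with the same four dimensions shown in the Coding Result table. The lookup this page performs ("inspect the exact model output for any coded comment") can be sketched as below — the `lookup` helper name is hypothetical, and the data is the array shown above:

```python
import json

# Raw LLM response exactly as shown above: one coding record per comment ID.
raw = """[
  {"id": "rdc_mfglh6b", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mfggway", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_mfgc7v2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mfgubem", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mfm5rum", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def lookup(coded_json, comment_id):
    """Return the coding record for comment_id, or None if it is absent."""
    for record in json.loads(coded_json):
        if record["id"] == comment_id:
            return record
    return None

coding = lookup(raw, "rdc_mfm5rum")
print(coding["emotion"], coding["policy"])  # fear regulate
```

The record returned for `rdc_mfm5rum` matches the Coding Result table above (responsibility `none`, reasoning `consequentialist`, policy `regulate`, emotion `fear`), which is how the table view and the raw output can be cross-checked.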