Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Bro its just not fun if you ai generated something and call it ai “art”.… (ytc_UgyEMvp4r…)
- Everyone is going to pay for being so ignorant, narcissistic and just stupid abo… (ytc_Ugwf3pmn-…)
- Fuck AI art. (Actually, I would use AI to assist me in writing stories. Don't co… (ytc_Ugw8D-dTr…)
- I think it's fine because yes you can make ai art but you can't tell the ai hey … (ytc_Ugz3UyAmz…)
- What sort of people even want to have this crap. Stupid yourselves into non exis… (ytc_Ugy0FNY_N…)
- May I recommend https://en.m.wikipedia.org/wiki/Society_of_Mind When it was wri… (rdc_j5x1j9o)
- Bs if I can convince gpt I'm God than a.i isn't powerful, il just make an e.m.p … (ytc_UgxkCr0cz…)
- Shit Grok 1 is running and Grok 2 is being created. ChatBPT is running were all … (ytc_UgzE0tFPZ…)
Comment
“Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable,” said Dr. Yampolskiy in a press release.
“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort,” he added.
As AI, including superintelligence, can learn, adapt, and act semi-autonomously, it becomes increasingly challenging to ensure its safety, especially as its capabilities grow.
It can be said that superintelligent AI will have a mind of its own. Then how do we control it?
“No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance,” he added.
Source: reddit
Topic: AI Governance
Posted: 2024-02-17 (Unix timestamp 1708146611)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kqt81wm","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kqsyt6m","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_kqtsbbu","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_kr3eiya","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_kqspw3a","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
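The raw response is a JSON array of coding records keyed by comment ID, which is what the "Look up by comment ID" view indexes into. A minimal sketch of that parse-and-lookup step, assuming the category sets inferred from the samples shown on this page (the real codebook may have more values, and `index_codings` is a hypothetical helper name):

```python
import json

# Allowed values per dimension, inferred from the records displayed
# above -- an assumption, not the project's authoritative codebook.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the (assumed) codebook, so malformed model output is caught
    before it reaches the dashboard.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

raw = '''[
  {"id": "rdc_kqspw3a", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''
codings = index_codings(raw)
print(codings["rdc_kqspw3a"]["emotion"])  # fear
```

Validating against fixed category sets at parse time keeps hallucinated labels out of the coded dataset; a record like the one above would then render as the Dimension/Value table shown earlier.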