Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Ai do not think they function on set of commandments pre-programmed by the human…" (`ytc_Ugy0Bg7i1…`)
- "I just asked ChatGPT to summarize an article on DailyMail that is behind paywall…" (`ytc_Ugzcjvooi…`)
- "easy solution: stop using applied demonology. \"but that would make our lives so …" (`ytc_UgxRJQ8ka…`)
- "Wouldnt it be great if a.i could see the benefit of a clan based human race. Liv…" (`ytc_UgzyTrDFe…`)
- "You realize, of course, that all the “improvements” artists are posting based on…" (`ytc_UgyM97Htm…`)
- "Techbros=Syndrome \"When everyone is super, no one will be.\" They're trying to m…" (`ytc_UgwFP1UlN…`)
- "I don't think we're looking at it from the right perspective. We know that LLMs …" (`ytc_UgzwvO_t6…`)
- "I hope AI goes away forever, with the sole exception of medical research. In r…" (`ytc_UgzrqpVF3…`)
Comment
That's not the kind of alignment he's talking about.
A "corporate-slave-AGI" you're thinking of is a benign scenario compared to the default one we're currently heading towards, which is an agentic AI that poses an existential threat because it doesn't understand the intent behind the goals its given.
[Intro to AI Safety, Remastered](https://www.youtube.com/watch?v=pYXy-A4siMw&t=2s)
Source: reddit · AI Moral Status · posted 2025-01-27 (Unix time 1738009315) · ♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_m9j33ec", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_m9i4odk", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9im9g4", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_m9jphet", "responsibility": "company",   "reasoning": "virtue",           "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m9ihrce", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"}
]
```
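The model codes a batch of comments in one JSON array, one object per comment ID, so looking up the coding shown in the table above is a single parse-and-scan. A minimal sketch in Python, assuming the raw response string is available as shown; the `lookup_coding` helper name is illustrative, not part of the tool:

```python
import json

# Verbatim copy of the raw model output displayed above (illustrative constant).
RAW_RESPONSE = (
    '[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"}]'
)

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding dict for one comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output; a real pipeline would re-prompt here
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "rdc_m9ihrce")
print(coding["emotion"])  # → fear
```

The coding for `rdc_m9ihrce` returned here matches the "Coding Result" table above (responsibility `ai_itself`, reasoning `consequentialist`, policy `unclear`, emotion `fear`).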