Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
No, the most dangerous thing about AI is it escaping and killing all life on earth to pursue more computing power in order to achieve whatever paperclip maximiser its real goal is.
All that you can imagine that is not total extinction of life on earth (and possibly everywhere else if it goes down the berserker probe route) is all but the most dangerous thing.
There really isn't an in between between "it's safe" and "extinction of all life". The "most dangerous thing is it exploiting our already corrupt and blatantly violent system based on exploiting everything else for selfish goals" ain't nothing more than "business as usual". What you describe is a very efficient tool, not what makes AI dangerous.
The silver lining of AI being used to kill every other human by humans (which is also another problem once terrorist group get their hands on AI you can jailbreak into teaching you how to make super ebola plague like whatever), is that it would be contained on Earth.
AI bring risk to a scale that nobody can even start to contemplate seriously with how ridiculously powerful it has the potential to become.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Governance |
| Posted | 1739122080.0 (Unix timestamp) |
| Score | ♥ -3 |
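The timestamp above is stored as a raw Unix epoch value (seconds since 1970-01-01 UTC, assumed). A minimal sketch of converting it to a readable UTC date for display:

```python
from datetime import datetime, timezone

# Unix timestamp as it appears in the metadata above
# (assumed to be seconds since the epoch, UTC)
posted = 1739122080.0

# Convert to a timezone-aware UTC datetime for display
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2025-02-09T17:28:00+00:00
```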
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | utilitarian |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_mbufgn5","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_mbwckak","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_mbv9cjn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_mbvokn3","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mbw76dt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
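The raw response above is a JSON array of per-comment codings, batched across several comments. A minimal sketch of looking up one comment's codes by its ID, using only the field names visible in the response (the `lookup` helper and its signature are illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw batch response above
raw = '''[
{"id":"rdc_mbufgn5","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_mbv9cjn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coding dict for a single comment ID from a raw batch response."""
    rows = json.loads(raw_response)
    # Index the batch by comment ID; raises KeyError if the ID was not coded
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

codes = lookup(raw, "rdc_mbv9cjn")
print(codes["responsibility"], codes["emotion"])  # ai_itself fear
```

Indexing the batch into a dict keeps repeated lookups O(1), which matters when inspecting many comments against one large response.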