Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Simple leave AI out of law enforcement and hospitals... it's sad to see it behav…
ytc_UgyhcTplN…
09:39
"Biological knowledge"? Why does this sound sinister when written by AI?
…
ytc_Ugw3jM19f…
@Pebble-j9bI wouldn't put it against him since it feels like this core concept w…
ytr_UgzXbz_BV…
the LaMDA does no processing, no "thinking" until you send it text. Then it uses…
ytc_Ugx3-zZxE…
I’m sorry… but it will be a LONG TIME before a robot can become a skilled trades…
ytc_Ugwf-41Kq…
I am happy that I have lived through the golden age of the years 1980 untill 200…
ytc_Ugz0D7BKw…
My favorite thing is that AI was somehow trained on kids playing with stuffed an…
ytc_UgwqskTDo…
Who's gonna have money to buy to those enterprises that already switched humans …
ytc_UgyHle1uk…
Comment
Yeah exactly.
Almost every single post about Google or OpenAI trying to capture their position rarely leads with this point and it is THE MOST IMPORTANT ONE. Every single one is "Dude they are just trying to stop progress!" or something along those lines.
Despite Microsoft and OpenAI's obvious motivations - **Can we please at least fucking acknowledge how insanely dangerous these technologies are?** *
We are standing on the precipice of great change. The ushers of this great change are telling people who are about to jump that only they can supply parachutes. This is of course nonsense. The answer isn't to then declare "This is nonsense!" while jumping off without a fucking parachute because you will die. You will splat against the ground travelling at terminal velocity and you will be dead.
*If you don't know why these technologies are so dangerous, you probably need to go and do some real investigating before you leap into a conversation about it because the potential danger is unlike anything we've ever come across in terms of how we think societies work or should work. It is a major threat to everything we think we know about ourselves and if we aren't careful it could cause havoc that we might not be able to walk back from. As Rob Miles said - there is no rule that says it will work out for us. Yes - the technology is going to be hugely beneficial in many, many ways... that will take care of it self, the negative will not take care of itself.
reddit
AI Harm Incident
1684281992.0 (2023-05-16 00:06:32 UTC)
♥ 44
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_fvw3b2g","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fvwggyl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"rdc_jkfb78i","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_jkfhmon","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkfpcvo","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
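The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch response could be parsed and validated is shown below; the field names come from the response itself, while the `parse_codes` helper and the strictness of the validation are assumptions, not the tool's actual pipeline.

```python
import json

# The raw LLM response shown above, verbatim.
raw_response = """
[
  {"id":"rdc_fvw3b2g","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fvwggyl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"rdc_jkfb78i","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_jkfhmon","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkfpcvo","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a batch coding response into {comment_id: codes},
    rejecting any record that is missing a coding dimension."""
    out = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        out[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return out

codes = parse_codes(raw_response)
print(codes["rdc_jkfpcvo"]["policy"])  # → regulate
```

Failing loudly on missing fields is deliberate: when a model drops or renames a key, it is better to surface the malformed record than to silently code it as empty.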