Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
At least AI wont be racist like some humans.
AI Egyptologist will not lie and m…
ytc_Ugz68DPQg…
We need to create a day z ai wich can switch off a threat ai in case of emergenc…
ytc_Ugwbc_3Z_…
Smart human 1: lets make production even more efficient cos we'll make more mone…
ytc_UgzaEwQbo…
The school has been around for a while actually. I graduated in 2014 and am now …
ytc_UgxH3d0Dk…
It’s like walking. Are you born with the talent to walk? Can only some people wa…
ytc_UgwALhWF5…
I believe in emergence. Like decades of engineering brought a car, making it pos…
ytc_UgyNKUe04…
@crazydave214 And it's not just a trend either, artists have been against genera…
ytr_UgzdIXehR…
Use nightshade and glaze to infect the AI databases, all artists should be using…
ytc_Ugz7Q9Y8M…
Comment
It's funny, because everyone that have tested and used the local models for a while mostly agrees that they are really far behind for example ChatGPT. [A bit of discussion around it](https://www.reddit.com/r/LocalLLaMA/comments/13gpnhq/addressing_the_elephant_in_the_room/)
Part of this misunderstanding comes from various factors, from tests being very one dimensional to using very un-scientific means to compare *(like letting chatgpt rate each model)*, and some trying to find the comparison that shows their pet model in the best light.
So that google paper and the resulting discussions and panics are largely built on a false premise. Still, it's pretty interesting to see how the industry is reacting to this perceived threat.
That said, I really hope open models keep on evolving and becoming better and better, and some day surpass OpenAI's models while still being able to run on normal hardware.
reddit
AI Responsibility
1684316475.0
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jkjuinl","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"rdc_jki4lv5","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jkha19r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkgipyl","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_jkiii9s","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
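The lookup-by-comment-ID step above can be sketched programmatically: the model returns a JSON array of coded records, and the record whose `id` matches the requested comment is the one rendered in the dimensions table. This is a minimal illustration only; the `lookup_by_id` helper and the inlined sample records are hypothetical, not part of the tool.

```python
import json

# Two records in the same shape as the raw LLM response shown above
# (hypothetical inline sample, not the tool's actual data source).
raw_response = """
[
  {"id": "rdc_jkjuinl", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "fear"},
  {"id": "rdc_jkha19r", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
"""

def lookup_by_id(raw, comment_id):
    """Parse the model's JSON array and return the record for one comment ID,
    or None if the ID is absent."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup_by_id(raw_response, "rdc_jkha19r")
print(record["emotion"])  # indifference
```

Each key of the returned record corresponds to one row of the "Coding Result" table (responsibility, reasoning, policy, emotion).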