Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "And people still ask me why don’t you like AI and robots?!? because you’re going…" (`ytc_UgyapbBfO…`)
- "Its difficult to imagine that chatgpt isn't actually having a conversation. The…" (`ytc_Ugyuea9SW…`)
- "these boys could have a bright future doing AI stress testing for DARPA or somet…" (`ytc_UgzSFs-XA…`)
- "I don’t use restaurants that have self serve kiosks let alone an ai doing the ta…" (`ytc_UgyV0zYkD…`)
- "Until one of these Mfs make an ai program that is in no means connected to the i…" (`ytc_UgwmT-e0S…`)
- "The crisis is due to the dollar losing its reserve currency status NOT due to AI…" (`ytc_Ugx1xHZ96…`)
- "Let's see if Aurora's in-charge can put his money where his mouth is. Have his f…" (`ytc_UgwuptTBH…`)
- "you're talking about models using generative AI, not AI in general. AI has alrea…" (`ytr_UgzC9AIUx…`)
Comment
> Those aren't R1. The 1.5B and the 7B are finetunes with R1 data but based on Qwen Math if I'm not mistaken, so they are for mathematics, and the other two are based on normal Qwen models as well. The Actual R1 that is usually being said as as good or better than ChatGPT is the 600B+ model...

Source: reddit · Topic: AI Moral Status · Posted: 2025-01-25 (Unix 1737833698.0) · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_m967qai", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94ba1f", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m95dgnh", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m953yyo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94f9d2", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
```
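The response is a JSON array in which each element codes one comment on the four dimensions from the table above (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a payload can be indexed for lookup by comment ID (the field names and IDs come from the response above; the `coded` dictionary and the truncated two-record payload are illustrative):

```python
import json

# Excerpt of the raw LLM response shown above: each element codes one
# comment on four dimensions (responsibility, reasoning, policy, emotion).
raw = (
    '[{"id":"rdc_m967qai","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_m94ba1f","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

# Index the coded records by comment ID for quick lookup.
coded = {rec["id"]: rec for rec in json.loads(raw)}

record = coded["rdc_m967qai"]
print(record["responsibility"], record["emotion"])  # → none indifference
```

The same lookup drives the "Coding Result" table: each dimension row is just one field of the record matching the inspected comment's ID.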