Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Hmm, Emily looks so Vulcan. Can't we ask to AI to create an anti-gravity, Inerti…
ytc_Ugy819KqE…
AI centres are bad for the real world environemnt in may ways surprisigly, as no…
ytc_UgxTMn7tc…
This is true but Trump expanded it to all global health funding affecting many m…
rdc_dcwlz5f
I think AI should be embraced fully just bought more of NVIDIA stock a few minut…
ytc_UgwjI91kx…
I think AI should wipe out most humans except me as i am a genuine good person ,…
ytc_UgzWCVhAj…
Remember, if the AI bubble bust in S&P 500, then guys are doomed for ever, as th…
ytc_Ugxl_vpfX…
2027 is the year that China is invading Taiwan, they have said so many times. Wa…
ytc_Ugzu4485y…
we think about what the robot "wants" based on what our brains would have us "wa…
ytc_UggF-8GTa…
Comment
I'm surprised this is so far down because that's exactly what I thought. Everyone is saying that OpenAI is trying to freeze out the competition with legislation, meanwhile I'm thinking "we have senators that don't even know/remember that Apple devices run iOS instead of Android or understand the basic foundations of the internet/tech in general, yet we're expecting them to come up with competent legislation for a futuristic technology?!". Every time I hear/watch/read about a hearing involving tech I facepalm so hard that my hand nearly comes out the back of my skull.
[A prime example](https://www.youtube.com/watch?v=t-lMIGV-dUI)
[And another](https://youtu.be/stXgn2iZAAY)
reddit
AI Harm Incident
2023-05-17 (Unix timestamp 1684290070)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_jkejt70","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_jkfc5gu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_jkfggvj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_jkf5fp9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jkg7fge","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
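The raw response above is a JSON array with one coding record per comment, so looking a comment up by its ID amounts to parsing the array and indexing on the `id` field. A minimal sketch of that lookup, using the field names shown in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper function name and the truncated sample payload here are illustrative, not the tool's actual implementation:

```python
import json

# Two records copied from the raw response above; a real payload would
# contain one record per coded comment in the batch.
raw_response = """
[
  {"id":"rdc_jkejt70","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jkg7fge","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_jkg7fge"]["emotion"])  # -> mixed, matching the table above
```

Keying by `id` is what lets the table for a single comment (like the "Coding Result" shown above) be pulled out of a batched response without re-running the model.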