Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I think you vompletely missed the point of what asmongold said.
He was comparing…
ytc_Ugy3FFoZ7…
If I’m remembering right from a previous episode isn’t it illegal to create and …
ytc_UgxYimpbA…
Why can’t AI researchers come up with better examples? Guys, you had years to th…
ytc_UgxvmqKzJ…
I believe ai is alien technology. You can’t tell me a filthy human created it.…
ytc_Ugxys-zcq…
Well I agree that AI has gone too far. I agree that we need to distinguish betwe…
ytc_Ugy5P_mDi…
Unfortunately, what's the answer to that going to be? Probably autonomous system…
rdc_ic18bj1
Obviously the number of humans alive in 20 years will be like 10% of we have now…
ytc_Ugx3A1ki4…
AI doesn't need to go Terminator on us to wipe us out, I think it's going to be …
ytc_UgwzBGXLY…
Comment
One of the tasks that AI is pretty decent at is taking notes from meetings held over Zoom/Meet/Teams. If you feed it a transcript of a meeting, it’ll *fairly* reliably produce a *fairly* accurate summary of what was discussed. Maybe 80-95% accurate 80-95% of the time.
However, the dangerous thing is that 5-20% of the time, it just makes shit up, even in a scenario where you’ve fed it a transcript, and it absolutely takes a human who was in the meeting and remembers what was said to review the summary and say, “hold up.”
Now, obviously meeting notes aren’t typically a high stakes applications, and a little bit of invented bullshit isn’t gonna typically ruin the world. But in my experience, somewhere between 5-20% of what *any* LLM produces is bullshit, and they’re being used for way more consequential things than taking meeting notes.
If I were Sam Altman or similar, this is all I’d be focusing on. Figuring out how to build a LLM that didn’t bullshit, or at least knew when it was bullshitting and could self-ID the shit it made up.
reddit
AI Responsibility
2025-08-19 (Unix 1755609928)
♥ 73
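The post timestamp in the metadata above is stored as Unix epoch seconds. A quick conversion sketch:

```python
from datetime import datetime, timezone

# Convert the stored Unix epoch seconds to a human-readable UTC datetime.
posted = datetime.fromtimestamp(1755609928.0, tz=timezone.utc)
print(posted.isoformat())  # 2025-08-19T13:25:28+00:00
```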
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"rdc_n9ig08d","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n9kka6l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n9jts9g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
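The raw response is a JSON array of coded records, one per comment ID. A minimal validation sketch for such a batch, assuming the allowed values per dimension are only those seen in the examples above (the real codebook may define more categories, and `validate_coded_batch` is a hypothetical helper, not part of the tool shown):

```python
import json

# Allowed values per coding dimension, inferred from the examples above.
# A real codebook would likely define additional categories.
CODEBOOK = {
    "responsibility": {"company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = '''[
 {"id":"rdc_n9ixia5","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_bad","responsibility":"robots",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
print([r["id"] for r in validate_coded_batch(raw)])  # → ['rdc_n9ixia5']
```

Dropping malformed records (rather than failing the whole batch) matches how a dashboard like this can still display the valid codes it received.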