Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- 48:39 This is the point I never hear "AI Experts" make. Everyone who understands… (ytc_Ugx0Rmn23…)
- These animated puppets with programmed responses are not at all in any way a thr… (ytc_UgiaE46Ty…)
- You can be sure American medical insurance companies are already prepping the "C… (ytc_UgzMLKpA5…)
- As a cartoonist and hearing how charlie cooks this AI "Artist" and states it mag… (ytc_Ugy1lUUXl…)
- honestly speaking, these conversations on this podcast around AI just keep getti… (ytc_UgwcBZ2vN…)
- AI is great if you use it, but not if you use it as a guide.… (ytc_UgxUavyUD…)
- 'AI SAFETY' ASSumes an existential crisis is INEVITABLE? 🤥 Because he's one of… (ytc_UgxZrlnxA…)
- Come on man. You gave it a hypothetical situation and said "If you were that a-m… (ytc_UgxB3xp7s…)
Comment (source: youtube, posted 2025-10-29T22:0…)

Why the almost cute 'piecemeal' approach to the edge cases of harmful AI scenarios like suicide enablement? Seems like a total distraction in the face of the overwhelming cataclysmic consequences of AI that our society is about to face. These harms include loss of employment, loss of truth, loss of reality, loss of meaning, loss of privacy, loss of humanity, loss of control, and loss of life. Humanity is totally unprepared for the tidal wave of dystopian change that AI will bring in the coming decade and these guys are withering on about the specific suicide edge case. How about a more general discussion of the AI enabled, and then AI led, end of our species? Is it just their lack of intelligence (ironically) and imagination that prevents them from addressing the broader scope of the impending disaster?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzjMuhHpeBxgFaMox14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzS7rs4qpPFajDL3xZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxJltvddOyvDoWLQR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwR_h976aiAY0MUe-54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeGsA0PUnjwPkSKpB4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxWksEdbSqNeFPlhlB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPgwpD3bErFDZqKW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugya4IiESzcDdKtfi-94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyjCmpMQHrAstjjVHd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-_tgLL9sufjfjjzF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
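The "Look up by comment ID" workflow above amounts to parsing the raw JSON array the model returned and matching a record by its ID prefix. A minimal sketch, assuming only the record shape shown in the raw response above (the `lookup` helper and the prefix-matching behavior are illustrative assumptions, not the tool's actual implementation):

```python
import json

# One record copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugz-_tgLL9sufjfjjzF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup(records, id_prefix):
    """Return the first coded record whose id starts with id_prefix, else None."""
    return next((r for r in records if r["id"].startswith(id_prefix)), None)

records = json.loads(RAW_RESPONSE)
match = lookup(records, "ytc_Ugz-")
```

For the record above, `match` carries the same dimension values shown in the Coding Result table (responsibility: distributed, reasoning: consequentialist, policy: regulate, emotion: fear); prefix matching is used here only because the UI truncates IDs with an ellipsis.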