Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I appreciate the honesty and openness. But many AI pioneers have been incredibly naive about AI's existential threat to the human species. While some deniers, like Yann LeCun, seem determined to bury their heads in the sand, most of the reasonable and thoughtful experts in the field have been vocal about their concerns. Yoshua Bengio and Geoffrey Hinton, for example, have both had a "conveniently" late epiphany about the impending danger of AI. Good for them. I just wish they'd been less naive and more outspoken from the get-go. Their wake-up call may come too late to save humanity.
Source: youtube | Topic: AI Responsibility | Posted: 2025-03-16T01:3… | ♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzRtJ-z9QeLE3pdaOh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugz3VjwBg3xyeK1NodV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyOJIzZQk9jeN1CGVp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxpKF0S9i-xbNTvsqJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzGNlgaDZkAh9-THuB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxJZ1QIv8h94asnpI14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwvjBtlbF799N-ZgsJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyhsNctZGhjBP2ugVl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmkvtiZqMLjypyVkd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxA-M7L5gXS8Sj1Hkt4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"}
]
```