Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ">Ring spokesperson Emma Daniels told The Verge that Search Party is designed …" (`rdc_o4whc8v`)
- "Is this a show? I mean, is this really an AI bot conversation or is this a scre…" (`ytc_UgzCad96d…`)
- "AI isn't that bad if its not used for bad stuff, like claiming to be an 'artist…" (`ytr_UgyFwseTr…`)
- "I would only ever drive in a Waymo for 30 minutes max. I don’t trust it to drive…" (`ytc_UgzRWg8t1…`)
- "There is a difference between wanting to stop the progress and wanting AI to be …" (`ytr_UgyoNlOkG…`)
- "Thanks for your lovely comment! Sophia really does embody the balance between wi…" (`ytr_Ugw-Pq-pl…`)
- "Everybody’s ChatGPT is unique to their own personality because the program is tr…" (`ytr_UgztW99RK…`)
- "We all know the world is headed there! And there are in fact benefits from surve…" (`ytc_UgwzmNm2r…`)
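The "look up by comment ID" view above amounts to an in-memory index from comment ID to its record. A minimal sketch, assuming records are simple dicts; the only full ID available in the sample list is `rdc_o4whc8v`, so it is the sole entry here (the others are truncated by the UI and are not guessed at):

```python
# Minimal sketch of the "look up by comment ID" view: build a dict
# index keyed on comment ID, then query it. Data shape is illustrative.
samples = [
    {
        "id": "rdc_o4whc8v",
        "text": "Ring spokesperson Emma Daniels told The Verge "
                "that Search Party is designed ...",
    },
]

# One-time index construction: ID -> record.
index = {s["id"]: s for s in samples}

def lookup(comment_id):
    """Return the record for comment_id, or None if it is unknown."""
    return index.get(comment_id)

print(lookup("rdc_o4whc8v") is not None)  # True
print(lookup("ytc_missing") is None)      # True
```

A dict lookup is O(1) per query, which is why the UI can resolve an ID instantly regardless of corpus size.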
Comment
Bengio's talk is a balanced and needed warning: AI is not about performance improvements or intelligent applications, but about risks, specifically systemic risks. What I appreciate is his definition of mitigations—not as idealistic solutions but as pragmatic actions (policy, safety, cooperation). One thing I wish he explored more: how do we share risk fairly? Too frequently, the communities most negatively affected by abuse or accident are given least voice on AI regulation. Also, there is conflict between how quickly innovation occurs and safety precautions; he refers to it, but trade-offs require more public discussion. Overall: a useful reminder that "progress" without responsibility is dangerous.
youtube · AI Responsibility · 2025-09-14T08:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlysxDiUdenqIwM4x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7mLly08D68nqo8QJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzz_fsaD9xaHarVSiZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwaaLpFqdybjYc80MR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxz-RDT1ONCwrxRNdt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyXlYUdzwaj97-UpJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzew6YgnjtIuYEcggx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyhdqRoDguF4_OTzON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy29IQsONSZQ9_0-Kl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzGbCePyvO5_ohynT14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
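A raw response like the one above is a JSON array of per-comment codings. A minimal parsing-and-validation sketch, assuming Python; the allowed label sets below are only those observed in this batch, so the real codebook may define additional values:

```python
import json

# Allowed labels per dimension, inferred from this batch's output only
# (assumption: the full codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company",
                       "user", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological",
                  "virtue", "contractualist", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation"},
}

def parse_codings(raw):
    """Parse a raw LLM response (JSON array) into {comment_id: coding}."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} label {value!r}")
        out[cid] = coding
    return out

raw = ('[{"id":"ytc_Ugy29IQsONSZQ9_0-Kl4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"mixed"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugy29IQsONSZQ9_0-Kl4AaABAg"]["policy"])  # regulate
```

Validating against a closed label set on ingest is what lets a coding table like the one above be rendered without special-casing malformed model output; any off-codebook label fails loudly instead of silently entering the dataset.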