Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Neils wrong and out of touch of reality again. People aren't ready for self driv…" (ytc_Ugz_KsJrW…)
- "Yeah if they could, I'd bet Marvel would definitely be using AI for the next X-M…" (rdc_k9h8ec5)
- "Imagine racking up huge amount of debt just to go to college then getting replac…" (ytc_UgykweBgy…)
- "I wish this topic would have gone on for 30 minutes more. AI will eventually hav…" (ytc_UgwYZy_Nf…)
- "the presenter of this robots is also a robot if you pay attention on his fingers…" (ytc_UgxTG63kt…)
- ">and now internships / junior dev positions have to contend with automation a…" (rdc_j6gshev)
- "I'm fully expecting to be replaced by AI in my current job in the next year or s…" (ytc_UgykZ4iAf…)
- "i think ai is a fun tool to mess around with, rather than use seriously like mos…" (ytc_Ugy8a7b2A…)
Comment
Wdym it "won't"? what about the AI alignment problem? AI governance gaps? bullshit. I don't think we should just defer talking about existential issues, as well. The "right time" may be too late. The problems she identified are interesting and important, and we shouldn't be telling people that they're going to destroy the world, but we shouldn't push aside other important problems for these...
youtube · AI Responsibility · 2025-01-05T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw0qVgaz67NR_EHNQZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxO9Hx0zZAQo0ued4x4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz7jb_TUkGILDANhll4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz1hfKZyzjBSYS1y4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgxrhR48AgG7XP7GzTN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzIGkzWk856XM4_iO54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPDIRoxDdM6dMmFOd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwSXx2FvyubnePpsJd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwTR-15F4sxDPAWn4p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLBupkdfmixBwrFSZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
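A raw response like the one above can be parsed and sanity-checked before the rows reach the coding table. The sketch below is a minimal example: the category vocabularies are inferred from the labels visible on this page and are an assumption, not a confirmed schema, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Label vocabularies inferred from values visible on this page;
# assumed (not guaranteed) to be the full sets the coder uses.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "contractualist", "consequentialist",
                  "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"disapproval", "approval", "outrage", "fear", "mixed",
                "resignation", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels
    fall inside the allowed vocabularies."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_UgzIGkzWk856XM4_iO54AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"regulate","emotion":"mixed"}]')
print(parse_raw_response(raw)[0]["policy"])  # regulate
```

Rows with out-of-vocabulary labels are dropped rather than corrected, so a malformed model output degrades coverage instead of silently corrupting the coded dimensions.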