Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Who remembers the guy who made a luffy ai drawing and got roasted that it has an…" (`ytc_UgyBIDxZV…`)
- "if we gave a conscious A.I one robot body, how would the alpha character perform…" (`ytc_UgyKi0YAg…`)
- "Let me add one of my favorite quotes that was originally for imagination but can…" (`ytc_UgxqEd0Q7…`)
- "I'm convinced that the Amulet of Kings.. I mean AI.. holds a power that is easil…" (`ytc_Ugy5Q3Lq3…`)
- "Lots of outside time and natural lighting in this school. Really think it’s a gr…" (`ytc_UgyQkkP9n…`)
- "I'd almost rather have AI in charge of our nuclear weapons than the transvestite…" (`ytc_UgxrS8W_O…`)
- "What? Robot vehicles are road-legal in Texas? That's insane. Is the computer pro…" (`ytc_UgwAMytQo…`)
- "its dangeruos guys...she is just a robot and can do what she says..making such h…" (`ytc_UgxDFhB1O…`)
Comment
I detect a lot of paranoia when it comes to AI recently. I think AI has its own particular risks but this video doesn’t explain the majority of them at all (eg risk of losing creativity and skills, risk that AI will just sabotage our feelings creating new forms of culture from lyrics to films, or even philosophy and cults or it will engage in romantic relations with humans and so on and on) - respect for wf aside, this video isn’t very good: it only outlines a generic risk for AI to destroy humanity in the most banal way we know it having seen it all in films for decades, without even explaining why and how in particular. There are on the opposite many subtle risks that come with AI that are not even mentioned here- a shame that this fairly good channel chose sensationalism over a honest debate on things
youtube
AI Governance
2023-07-14T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzgFuuwnbPlnRsPXl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyOgP8RdRvGqY2urD54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPG3rTY5SMr5pd2X54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzL6MrltpezmW8yQjx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwfA20nPymYqD-tpMx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyPiPLJCyfOFLbRSpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyeOabVIGTRqZp28ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOqgiZKHlqT7DtjQ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
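The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw model output as JSON and index the records by their `id` field. This is a minimal sketch using only the field names visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_by_id` is hypothetical, and the snippet below truncates the batch to two records for brevity.

```python
import json

# Raw model output, truncated to two of the records shown above.
raw_response = """
[
  {"id": "ytc_UgzgFuuwnbPlnRsPXl14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # company outrage
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, so a real pipeline would likely wrap the parse in a try/except and log the offending raw string for inspection on this page.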