Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
The answer to this AI problem is to have a robust redistribution of profits of c…
ytc_UgyJc9Ovn…
@awesome2259lol assumption 1: these parents didn’t know how depressed their kid …
ytr_UgxvwgH3_…
I’m really hoping there’s some actual teacher face time as our students’ minds h…
ytc_Ugw-yFHIS…
The “trick” is to use hash map.
If you’ve seen this before, it pretty straight …
rdc_hik4knd
At this point a robot is better at speaking like a Person then the us president …
ytc_UgyHEumZK…
Evidence: some people think Eliezer Yudkowsky has any actual knowledge about AI …
ytr_Ugx65e8Gm…
I'm having trouble deciding between a Bachelor's degree in Computer Science, Dat…
ytr_Ugy0hMcXp…
Shad is helping *no one* but himself. Yes, AI "art" is a reality and is here to …
ytc_UgxxbuGjD…
Comment
SuperAI waking up one morning and plotting genocide? Really, that’s the best we come up with (no offense, just a rhetorical jab)? The odds of that are slim i.m.o. The current academic debate spans far richer scenarios: AI as a bumbling bureaucrat drowning us in optimization errors, or as a cold power-seeker bulldozing us with the indifference we show ants. The most immediate risk, though, is humans weaponizing AI for surveillance, manipulation, and profit. In the end, the biggest threat in the equation isn’t the machine at all, it’s still the human factor. Nevertheless I was very entertained by this conversation, thumbs up keep going 👏
youtube
AI Governance
2025-09-05T20:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzC3xDDiVTS1teWTpp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZAovwcJ-o_Qr9-jd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_Mf3Ke7o7fjuYZad4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyE49vvBljl92hFfXF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw59ynf1zrOUBZPMDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjrM2TzgLPg6X_7al4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7mbgBTGwdTKVC4qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRH5BKtJIgc2-xPOJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwohdaZyLjU-vm1a8Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzf6Ik9RvnJHsvc21Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
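The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and validated before it populates a coding-result table — the function name `parse_codes` and the strict-validation behavior are assumptions for illustration, not the tool's actual implementation:

```python
import json

# Two entries copied from the raw response above, for demonstration.
raw_response = """
[
  {"id": "ytc_UgyE49vvBljl92hFfXF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzC3xDDiVTS1teWTpp4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a raw coding response and index the codes by comment ID.

    Raises ValueError if an entry lacks a dimension, so malformed model
    output is caught before it reaches the results table (an assumed
    policy; the real tool may handle missing fields differently).
    """
    codes = {}
    for entry in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codes[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codes

codes = parse_codes(raw_response)
print(codes["ytc_UgyE49vvBljl92hFfXF4AaABAg"]["responsibility"])  # distributed
```

Indexing by ID makes the lookup for a single inspected comment (as in the "Coding Result" panel above) a constant-time dictionary access.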