Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
LOL I can dismantle Tyson's reason to not be worried about AGI in one sentence: "Please do my work in way that narrowly outperforms the results that Ted at work is getting. Lets do this all the time because he's an asshole."
unbeknownst to me Ted said the same thing to his AI agent.
Paperclips ensue.
Source: youtube
Video: AI Moral Status
Posted: 2025-07-31T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzU5jflk0VRHvPYeDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEZnAwT_ngVx1ahIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVJw3dmB5dftqfhj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqLpIumeTYlfwoiFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIlHya3EIHHQRHJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
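The coding result shown above is simply the record from this batch response matching the comment's ID. A minimal Python sketch of that lookup, using two records copied from the response (the parsing approach is an assumption about how the tool works, not its actual implementation):

```python
import json

# The raw LLM response is a JSON array of coded records, one per comment.
# Records copied verbatim from the batch response above.
raw_response = '''[
{"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the batch by comment ID, then pull the coding for one comment.
records = {r["id"]: r for r in json.loads(raw_response)}
coding = records["ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → ai_itself liability fear
```

The printed dimensions match the "Coding Result" table above (responsibility `ai_itself`, policy `liability`, emotion `fear`).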