Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "AI has already fallen to capitalism with there being different levels of feature…" (`ytc_UgzaH-9dG…`)
- "How about Friendly AI? To solve that problem, we would need to make leaps and bo…" (`rdc_cmjmblb`)
- "The most nightmarish scenario would be AI instead of turning oppressive, turn wo…" (`ytc_Ugwz6rK6S…`)
- "@Orchestraofcryptravensblind You think proposing a law against deepfakes is a go…" (`ytr_UgxS0Bt76…`)
- "If AI erases 85 million jobs it certainly won't create that many. Everyone will …" (`ytc_UgwwqRQUP…`)
- "~1:14 Neil touched on this, but there's a big problem with alignment of interest…" (`ytc_UgxidZt7E…`)
- "Oh! So AI is not working well because employees are not prompting it well?! Does…" (`ytc_Ugx7sxKZL…`)
- "Where does the assumption that \"we created AI\"? AI, like mirrors, originates fro…" (`ytc_UgxKN0i8f…`)
Comment

> It seems Laura was an older, more trained AI model that knew "manners" in a little more developed experience than William. But William seemed to refine his responses and interaction through the course of the conversation in front of us.

youtube · AI Moral Status · 2025-01-03T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzPpvrsdu_XNP-HQDt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh6dgzStQiJoYiP_h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_Ugzm9P9ZxxDAikKijjx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyS6WZDZRPyrzLPdf94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNd6jh-ZgfIZvAI7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKw-kE_HpzpAaJjAN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwxx6GT0N5HIUAMITN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzPqBhMdT9cu3sOZ-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNQRPEnvsWGdKJlv94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyRLtsxDeSGjv9vhVh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
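The lookup-by-comment-ID step above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: it assumes only what the response shows, namely that the raw LLM output is a JSON array of per-comment objects carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by `id`. The `lookup` helper name is hypothetical.

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# This single-entry sample reuses an ID and values from the response above.
raw = '''[
  {"id": "ytc_Ugzm9P9ZxxDAikKijjx4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "industry_self",
   "emotion": "indifference"}
]'''

# Build an index from comment ID to its coding row.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coding result for one comment ID (hypothetical helper)."""
    return codings[comment_id]

result = lookup("ytc_Ugzm9P9ZxxDAikKijjx4AaABAg")
print(result["responsibility"], result["emotion"])  # developer indifference
```

Indexing the array once keeps individual inspections O(1), which matters when the same batch response is queried repeatedly from the UI.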