Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI needs to be made 200% illegal for job screening. I don't give a crap, compani…
ytc_UgyMHe3gA…
I'm very much in the middle and am not entirely sure what's wrong with the "AI i…
ytc_UgyDjXoF7…
We have a coworker who AI generates everything he is asked to produce from works…
ytc_Ugy4l-w8q…
Always knew it will happen, it was inevitable with how cameras became cheap and …
ytc_UgzNc7g6B…
"meaningful images from ai" the funniest thing ive heard. Ai cant create meaning…
ytr_Ugx2i3aEC…
Yes this probably it i try so many ai image generator one free one paid i try gp…
ytc_UgzR93dFB…
Something has to be done about AI because we all know President Trump will not t…
ytc_UgxnMOih1…
how come nova does not talk about open source and XAI , Explainable Artificial I…
ytc_UgwlqixeO…
Comment
I think the alignment problem is fundamentally unsolvable. It's basically an extension of the halting problem - you cannot predict the outcome of a program except by running it. All we can do is run tests and make guesses and assign a percentage, and AI is always going to be more patient and careful than we can afford to be.
youtube
AI Moral Status
2024-03-16T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyvO3iXf7sBGG0aLqt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhxMf1nWDbFh17SOV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzb8V66eQWin6DZxBt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgydRodPqlBB2A_yaBN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2E-ouNJd783sJGot4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwsTVUkerQBpvCp-Yd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCzX4k94XMwtMmLfx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
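A raw batch response like the one above can be parsed and sanity-checked before its records are written into the coding table. The sketch below is a minimal, hypothetical validator: the field names come from the response shown, but the allowed value sets are inferred only from the values visible in this one response — the real coding scheme may permit more.

```python
import json

# Dimension vocabularies observed in the response above.
# Assumption: the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"resignation", "indifference", "outrage", "fear", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only well-formed records
    whose values fall inside the known vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than fail the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a shortened, made-up comment ID:
raw = ('[{"id": "ytc_example", "responsibility": "ai_itself", '
       '"reasoning": "consequentialist", "policy": "none", '
       '"emotion": "resignation"}]')
print(parse_codings(raw)[0]["emotion"])  # resignation
```

Dropping (rather than repairing) off-vocabulary records is a design choice: it keeps the coded table clean and lets rejected IDs be re-queued for another coding pass.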