Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I rode in one self driving- from streets to highway back to streets and zero iss…
ytc_Ugx5mjgQr…
No one laughs at their little jokes because this is pretty terrifying. Why would…
ytc_UgwVuytIP…
Did you see the senator or whoever saying AI cloned his voice. BS it was just a …
ytc_Ugxfnwnqi…
I disagree that you must be conscious to lie. I also disagree that ChatGPT is ev…
ytr_UgzH6TUjj…
Anyone who thinks LLMs work by just stringing words together that it's already s…
ytc_Ugwe9Z4g0…
Funniest part is, if you figure out what ai he’s using, you can legit do the sam…
ytc_UgxWVwEV0…
Why been disable is an excuse to use ai and still call It art??? Stop lying to y…
ytr_UgzKzEZ9U…
It’s not about the ai generated art.
It’s about the training of the model. Usi…
rdc_kz0dtzk
Comment
I don't think there's necessarily a difference in kind between AI and humans. I do however think there is a fundamental limit to what people will actually do. Yes, the AI companies are going nuts with building bigger and hungrier data centers, but we're not even close to AGI with those, let alone Superintelligence. To think that this bubble wouldn't burst well before we get to Superintelligence strikes me as a little silly. I'm worried about the economic and cultural ramifications of AI closer to the way it is now much more than I am about the idea of Superintelligence. I don't think it's completely impossible, but I'd classify Superintelligent AI in the same category as interstellar travel: technically possible, but so unreasonable to accomplish that it's extremely unlikely for us to actually do.
youtube
AI Moral Status
2025-11-01T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz6idoqSOMT011KrQ54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6do0hhd3IvUePHQ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNEnzhSgveLp2ys-d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxRGPaiFomrZXKI3Fl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFSvkAbdRnDfCWTIp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1sIOM6nQI3GAX3UR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVouUmDwYZPSfiL7h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUgjrdxau62lzsYJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_UfQv7wTVXnaSLJB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnWgz4MBt3ZkNVAj14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]
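A raw response in this shape (a JSON array with one object per coded comment) can be parsed and sanity-checked with a short Python sketch. The allowed values per dimension below are inferred from the samples shown on this page, not from a published codebook, so treat them as assumptions:

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (an assumption, not a published schema).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    codes) into a dict keyed by comment ID, rejecting unknown values."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Minimal example with two rows in the same shape as the response above
# (IDs here are placeholders, not real comment IDs).
raw = '''[
 {"id":"ytc_A","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_B","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]'''
codes = parse_coding_response(raw)
print(codes["ytc_B"]["policy"])  # regulate
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents a label outside the codebook, rather than letting it silently enter the coded dataset.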