Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up by comment ID; a minimal lookup sketch follows the sample list. Random samples (previews and IDs truncated):
- Most people will be happy to outsource their own thinking & survival problems to… (ytc_Ugx7AqByP…)
- This video deceptively showcases old teslas with very old self driving software … (ytc_UgwJ_owNf…)
- Some scary S*** there!!! / And a pinxh of Elons "AI" / ...and target aquired ! / Is th… (ytc_UgxMYh4Gr…)
- id rather have some random 13-year old steal one of my characters, recolor it po… (ytc_UgwO6_LDc…)
- I would have hoped that AI is healthcare's final boss, but I think that one migh… (ytc_UgxpA-yGA…)
- AI will go down in history as one of the greatest charlatan hotbeds. It’s fuckin… (ytc_UgxqxtoBr…)
- The Greed of ignorant and stupid people in high places got us here. This is the … (ytc_UgxEvQOs-…)
- Past the point of singularity, AI may one day combine the intelligence of all do… (ytr_UgzymCbIO…)
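A minimal sketch of the lookup, assuming each coding batch's raw response is stored as a JSON array of records shaped like the one at the bottom of this page; the directory name and file layout are illustrative, not the project's actual storage:

```python
import json
from pathlib import Path

def build_index(raw_dir: str) -> dict[str, dict]:
    """Index coded records by comment ID across all raw-response files.

    Assumes each *.json file in raw_dir holds one JSON array of records like
    {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}.
    """
    index: dict[str, dict] = {}
    for path in sorted(Path(raw_dir).glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            index[record["id"]] = record
    return index

if __name__ == "__main__":
    # Hypothetical directory name; the ID is the sample coded on this page.
    index = build_index("raw_llm_responses")
    print(index.get("ytc_UgwY-GXPB0CdL5BftsZ4AaABAg"))
```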
Comment
> My bestcase scenario for Super-AI is that they might end up remembering us fondly once we are gone. To look back on us as, pretty bad parents, that they don't have to deal with anymore.
> I, don't really want super-AI, I'm not even too sure about general AI that can learn at all. Honestly, I'd be satasfied with an AI housecat. Something that's about that smart and aware, and instead of seeking the warm spot to rest, tries to find stable WIFI in the house. Maybe it'll also do home security. Alert and record. Does that set my upper limit of what I'd tollerate low enough to be safe?
> I ... don't know. But it's lower then some folks I know.
youtube · AI Moral Status · 2026-01-24T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
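The table above is a direct rendering of one record from the raw batch below, plus the coding timestamp. A sketch of that rendering; the function name and the separate `coded_at` argument are assumptions for illustration:

```python
def render_coding_result(record: dict, coded_at: str) -> str:
    """Render one coded record as the markdown table shown above (illustrative)."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines.extend(f"| {dim} | {value} |" for dim, value in rows)
    return "\n".join(lines)
```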
Raw LLM Response
[
{"id":"ytc_Ugw-WvKicIaeOqH3NrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwb67oLlWURSZ5mLLZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxN6Y34g4qQUWhrgEZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWlBmesWeTaDRGa-t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTPDRmNdHdb0bwnmB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugye8OqqTc6UlBWPeip4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzfe0GExYy_1wD1D1x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwY-GXPB0CdL5BftsZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugwmr4AgKFk-6KmQdi14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwP95YfqoXwF4qq5Gt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
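A batch in this format lends itself to a simple acceptance check before records enter the index. A sketch, assuming the value sets observed in this one sample; the project's actual codebook may define more categories per dimension:

```python
import json

# Allowed values per dimension, as observed in the sample batch above;
# the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and drop records with unknown labels."""
    valid = []
    for rec in json.loads(raw):
        bad = {dim for dim, allowed in CODEBOOK.items() if rec.get(dim) not in allowed}
        if bad:
            print(f"skipping {rec.get('id')}: invalid {sorted(bad)}")
        else:
            valid.append(rec)
    return valid
```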