Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Ai Art controversy would literally not exist if not for late stage capitalism. W… (rdc_k9hk3ea)
- my friend has this and still makes fantastic and creative artworks by hand you d… (ytr_UgwWSqeqq…)
- The WHO can't, but if countries come to an agreement, it would be possible. As … (rdc_grr921y)
- Who's going to buy stuff AI created if people are in poverty? It wil crack the s… (ytc_UgxkBNwWP…)
- What about ultra low power network running under 1GHz that users have no clue ab… (ytc_Ugwj3e8vn…)
- Rules don't apply to the wealthy or corporations. Just like with self driving, l… (ytc_UgwTNrQ5v…)
- @markaven5249 the interviewer asked something similar with the question "can't a… (ytr_UgxTHnLdX…)
- It seems like a pretty obvious outcome, LLM models are largely trained on intern… (rdc_kozr5y6)
Comment
@kittywampusdrums The majority of AI engineers at the leading labs say there is a significant chance of human extinction from AI. The Center for AI Safety (CAIS) put out a statement about mitigating the risk of human extinction from AI and it was signed by most of the top AI scientists in the world. Published AI researchers gave an average chance of 1 in 6 that AI would drive humans extinct this century.
I also encourage people to actually learn how AI works. Read the actual papers. You'll learn that excepting rare cases where interpretability research gave us a clue, no one on earth understands the internals of modern AI systems. You'll also learn that LLMs contain abstract representations of the world, and they have internally coherent preferences, and that they are becoming more agentic (behaving as if they have goals).
You can also learn about the principle of Instrumental Convergence discovered by AI Safety scientists, which argues that almost no matter what goal an agent has, there are specific subgoals it will always have, such as gaining power, self-preserving, gaining resources, and reproducing. (This was later mathematically proven, and then was observed dozens of times in independent experiments with current AI systems).
Learn more about AI, and stop believing people when they tell you everything is definitely fine. The 5 most cited computer scientists on the planet say we're in significant danger.
youtube
AI Moral Status
2025-04-27T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugz1H5JJzdwHQPJYo454AaABAg.AHOdwYbILlUAHOfvgFX6SY","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxK-l2ZP41loCLqNx94AaABAg.AHOcXBLemzKAHP-USNUptv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxK-l2ZP41loCLqNx94AaABAg.AHOcXBLemzKAHS2RFtuu6R","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzoRu3_W-UgofvRr5t4AaABAg.AHObNjMEWeOAHOgAuH_kyF","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHP00izhxzr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHP24vHOMT_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgznqhngPdcAmHluP_p4AaABAg.AHOY9Jr9jECAHPS3ysGTf3","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgylGmOEwcVrQcjo6H54AaABAg.AHOW14QCSVWAHPVIP6jii9","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgylGmOEwcVrQcjo6H54AaABAg.AHOW14QCSVWAHP_cRWJ4Cc","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgwM5b-WYKTTDKGSnUN4AaABAg.AHOVxjH4jcAAHRi7lYnI81","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
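A downstream consumer of a raw response like the one above needs to parse the JSON array and sanity-check each record against the coding dimensions. The sketch below is a minimal, hypothetical example: the sample record and its `id` are invented, and the allowed label sets are only the values observed in this page, not the full codebooks.

```python
import json

# Hypothetical sample payload in the same shape as the raw LLM response above.
RAW = '''[
  {"id": "ytr_example123", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Label values observed in this page's data; the real codebooks may define more.
OBSERVED_LABELS = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw coding response and flag any unexpected label value."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_LABELS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

coded = parse_codings(RAW)
```

Validating against a fixed label set at parse time catches the most common failure mode of LLM-based coding, namely the model emitting a label outside the schema, before it contaminates downstream tallies.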