Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Do you understand that a virus does not care about your patent? Do you get that…" (rdc_grrsahn)
- "Literally did the same thing looking for some simple distilling recipes. Somethi…" (rdc_jhgfzac)
- "Try understand virtual machine design where limited physical manufacturing can m…" (ytc_UgzKTxyDo…)
- "Let me ask you something (this is coming from someone who used to use ai but doe…" (ytr_UgyYdK-GA…)
- "He gave chatgpt a downgrade. I don't think his goals are same as before. He dest…" (ytc_UgwV7_bq4…)
- "For my r/iasip ball knowers out there, we’re basically witnessing the Silicon Va…" (rdc_oi2sm73)
- "Only level 5? Don’t be ridiculous. I use Tesla autopilot all of the time. It ADD…" (ytc_Ugwy27zHi…)
- "@AliceB0 but is a certain mathematical function that simulates what a human woul…" (ytr_UgyppJggI…)
Comment
"Even when we say we want peace and stability" Who is this mythical peace loving altruistic "we"? Humans don't say one thing in agreement. Incentives, different for each individual, combine to create human made catastrophe and crisis. How will AI differentiate between what some hypothetical doom craving Western corporatist politician wants and what a subsistence farmer in Malawi wants? They are vastly different things. For example Mr Yudkowski here wants to be more famous than Altman, and sell books, to soothe his god complex infused ego. Even if all the doomerism comes to pass, it would take only a few thousand humans or less to allow the species to survive. Like the ones who have never owned an iPhone - not every human is a carbon copy internet trawling Westerner. If that happened it would ironically solve all our other problems like over population, resource scarcity and climate change. And even if none survive, global level extinctions have happened on this planet before. Time will keep ticking by, and we'll all be dead so we won't even know.
youtube · AI Governance · 2025-11-18T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyjG32YOUy-BJ4NTgl4AaABAg.AQ9BSJh1pP7AQf-PzhlAkT","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyjG32YOUy-BJ4NTgl4AaABAg.AQ9BSJh1pP7AVV7tIsEivA","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyZZ9jHk4bbQfEP0Dx4AaABAg.AQ4UwEPAzCsAQ6hbqUr0k4","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eAQgL20d3719","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eATyhki_unaX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyXF8A8_gVuWh8jAWF4AaABAg.APrBnahp569AQ-AheTQaPD","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgwH9PmWEIV3zB8Zh2F4AaABAg.APa2ECqMBskATNIK6TpVv2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPgCAuSchC2","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPgYsl9AY7V","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPguw1pIMuR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
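The lookup-by-comment-ID flow above can be sketched in a few lines. The record shape (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) matches the raw response shown here, but the parsing and indexing code is a hypothetical illustration, not the tool's actual implementation:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied from the response above, for illustration.
raw_response = """[
  {"id": "ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eAQgL20d3719",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwH9PmWEIV3zB8Zh2F4AaABAg.APa2ECqMBskATNIK6TpVv2",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]"""

# Index the codings by comment ID so any coded comment can be
# looked up in O(1) instead of scanning the whole array.
codings_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codings_by_id["ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eAQgL20d3719"]
print(record["emotion"])  # fear
```

The ID prefixes (`ytc_`, `ytr_`, `rdc_`) appear to distinguish YouTube comments, YouTube replies, and Reddit comments, so the same index works across platforms.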