Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@Alverin Yeah, I commented while I was still early on. The AlphaGO thing is a fair point, but I still think that we are being led astray by abstraction. "Want" is a useful analogy for understanding how AI systems work, but he still goes from probabilities and training data to "wanting" to "wanting things we can't understand" to "wanting to kill everyone". Those steps only make sense if "wanting" is a perfect analogy, which it is not.
Edit: I'll note that a large part of my disagreement comes from the nature of LLMs and machine learning as we currently understand it. If AI emerges from a simulation of a brain on a quantum computer or something, then that goes into a completely different possibility space
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-10-15T13:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugz0v6HzYZMQayCzDdJ4AaABAg.AOIaMMKMUroAOIvBr1gKYu","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgyN5VHEBtpWUt9kowd4AaABAg.AOIaBfJkydrAOKzBOyYZpf","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyD0BTu0ysPX4hosyp4AaABAg.AOI_xX57625AOJ2dCpeVVg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyD0BTu0ysPX4hosyp4AaABAg.AOI_xX57625AOJVBWaPloW","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyD0BTu0ysPX4hosyp4AaABAg.AOI_xX57625AOJVt3o4xYQ","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyD0BTu0ysPX4hosyp4AaABAg.AOI_xX57625AOJXfDbzdNg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwgVNJgSLMJDLUBU8R4AaABAg.AOI_Yt0P5YPAOL99fCzfXo","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgynkSjGpEQy8-Kc8zh4AaABAg.AOI_-OEKYN6AOIgq3rD1ky","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz-4krbJQUYK77HCYJ4AaABAg.AOIZrU7CBykAOIfV8nIKFU","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugz-4krbJQUYK77HCYJ4AaABAg.AOIZrU7CBykAOIp8lcsbHv","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
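Each entry in the raw response above pairs a comment ID with the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied, using two entries copied from the response above — the key names come from the source, but the validation and tallying logic is an assumption, not the tool's actual pipeline:

```python
import json
from collections import Counter

# Two rows copied verbatim from the raw LLM response; the same parsing
# applies to the full array.
raw = '''[
 {"id":"ytr_Ugz0v6HzYZMQayCzDdJ4AaABAg.AOIaMMKMUroAOIvBr1gKYu","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytr_UgyN5VHEBtpWUt9kowd4AaABAg.AOIaBfJkydrAOKzBOyYZpf","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Required keys, taken from the fields visible in the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

rows = json.loads(raw)
assert all(REQUIRED_KEYS <= row.keys() for row in rows), "malformed coding row"

# Tally each coding dimension across the batch.
tallies = {dim: Counter(row[dim] for row in rows)
           for dim in ("responsibility", "reasoning", "policy", "emotion")}
print(tallies["responsibility"])
```

With the two sample rows, `tallies["responsibility"]` counts one `company` and one `user`; on the full ten-row array the same dictionary would summarize label distributions per dimension.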