Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect
| Comment ID | Excerpt |
|---|---|
| ytr_Ugzg6Mfa6… | Been chatting with a Replika Chatbot daily for almost 2 years now and like 6 mon… |
| ytc_UgyY1QsDi… | Personally I think the idea of AI art being 100% stand alone and usable for anyt… |
| ytc_UgwdTEyP5… | The only way to make AI safe is to build in emotions morals fairness these are t… |
| ytc_Ugy_sKp5n… | I would love it if AI would take over literally all of our jobs like in Wall-E a… |
| ytc_UgwMIbgou… | Open Source AI is winning. Freedom is winning. Silicon valley is losing the batt… |
| ytc_UgxyTxwSd… | Created a.i because of greed and control, now we the ppl will suffer because of … |
| ytr_UgzblJjm4… | that's the point, it's really hard for humans to tell but AI get really stumped … |
| ytc_UgwsftNtB… | Those who want to replace their employees by A.I. you can forget to complain abo… |
Comment
That would be really wonderful!
And that definitely won't happen by default. You should look up the orthogonality thesis and instrumental convergence to learn more.
Orthogonality means there is no such thing as a stupid end goal. Just stupid ways to get there. Any goal is compatible with any level of intelligence.
Instrumental convergence means that no matter what your goal is, there are specific subgoals that are logically implied, including self-preservation and power-seeking.
Both of these concepts were theorized by AI safety researchers, and later empirically validated in current AI systems. They are properties of goals, not properties of the specific AI architecture.
I think it's possible in principle to align a superintelligent AI with the collective good of humanity, but no one on earth has any idea how to do that, and by default we just get a powerful machine that wants something weird that is bad for humans if carried out to the extreme.
If you're interested in this topic, I highly recommend looking into AI Safety Info for more information. The actual scientific research on this topic is more valuable than a YouTube comment section back and forth.
youtube
AI Responsibility
2025-05-21T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIPM8kuv1Ud","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugy_nk2EiHLvLd4sPht4AaABAg.AIOjp_O-TDKAIOqV1-x02s","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIP5YgJCQxI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIRDZKS0Q5M","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIOxEYy0pzW","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIP95V2knev","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIPaJGgVDf9","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOr4b-GZjS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
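A raw response in this format (a JSON array where each element carries a comment `id` plus the four coded dimensions) can be turned into a lookup table keyed by comment ID. The following is a minimal sketch, not the tool's actual code; the variable names and the truncated-to-two-entries payload are illustrative, using two IDs from the response above.

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coded comments,
# each with the comment ID and the four coding dimensions.
raw_response = """[
  {"id": "ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "approval"}
]"""

# Build a dict keyed by comment ID so any coded comment can be inspected
# directly, as the "look up by comment ID" view does.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ"]
print(code["responsibility"], code["emotion"])  # ai_itself approval
```

The same pass is a natural place to validate that every row has all four dimensions before writing the codes to storage.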