Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Personally although I think Mr Musk is correct to call for this.
The reality is though that the AI arms race is already under way. If one side calls a truce while the other side quickly develops their AI - WMD what will be the outcome? There is no doubt the leading protagonists in Non Judicially Regulated countries will relish any pause in the West.
Sadly the cold war taught us that the only strategy that works is Mutual Assured Destruction (MAD).
This all beggars the question 50 years on, Has humanity as a species in its morals and principles really progressed, or are we in fact a declining species determined to destroy ourselves and all other species one way or the other?
Source: youtube · AI Governance · 2023-03-30T07:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzVxLNzt-1oV3hnM954AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtlQ7yTK5y0FWjght4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfqC32OY50_vSULcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzs2F1Sw0wEuXjFG_R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHhwBvIcJAtnEn1_d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyIH7TI98iGSJa3fzF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugws9LG6oGPBF1DhUfx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-q3PTXHISAP4EpNh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyQ_guukERQ4tin8OJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxuw4r-QaSU79GjGdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
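A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the pipeline's actual code: the allowed labels per dimension are inferred only from the values visible in this sample (the real codebook may define more categories), and the function name is hypothetical.

```python
import json

# Allowed labels per dimension, inferred from the sample response above
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "government", "distributed", "user",
                       "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "unclear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) and return
    {comment_id: coding}, raising if any record is missing a dimension
    or uses a label outside the inferred codebook."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec[dim]  # KeyError here flags a missing dimension
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

For example, `validate_batch` applied to a one-record array returns a dict keyed by comment ID, so the "look up by comment ID" view can be served directly from the parsed result.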