Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I'd rather talk to an AI. I don't need an overprivileged narcissistic therapist …" (ytc_UgwWzYE_H…)
- "A question that may at some point be asked of Ai: Of the three individuals liste…" (ytc_Ugw4EmDDb…)
- "Worst case scenario: It decides humans, are imperfect, so it decides to judge pe…" (ytc_UgwnnxMcl…)
- "Graduated in 2010 and wound up in the live music sector until the pandemic hit. …" (rdc_gkqq7bg)
- "Most asked question for AI. They must know were are freaked out and wonder why w…" (ytc_UgwR5PRw1…)
- "UBI is nothing but a control system . Society will be better off without it. Peo…" (ytc_UgwAyzFbs…)
- "@DerPylz Sandbox proposal seems reasonable, yes. I also agree, that it would be…" (ytr_Ugzz5FgId…)
- "Can't wait for the day AI comes to unblock my toilet. There is still the one fla…" (ytc_UgwOMpkMJ…)
Comment
Everything from fire, steam,..... are dangerous, nuclear power, biological weapons are existential threat. But how much benefits humanity had harvest from fire, steam, electricity, nuclear, biology so on ?? IF we stop AI research and let genocides dictators like putin, xi..will be empower with AI their armies and democracy will be wipe out, that's the real existence threat. So we don't have choice, but to develop AI at full speed, because the only valid way to win a war is through deterrence, that minds we have a lot work to do instead of wasting time arguing with pessimist people.
Somebody said control AI is difficult, everything these days are difficult, the easy days of research are gone, there is no time for whining.
Platform: youtube
Topic: AI Governance
Posted: 2023-06-26T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz-xaGPm3D8c0ixwBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpCXSpz_jjcNwXPVZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyWT2UpaskQUMAayqZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJMOcfjVCpVbEKq7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-SmocC08gAzk5kgp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx9jAOzKSBkQ2GH8K54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwt7LuF1KC8pyqZBbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxF1j0N3Xrp1OOO34N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]