Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
this is mayority of a way to much of a strech, first of all, if there is a AI co…
ytc_UgyfZOkvE…
Dr. Tanu Jain makes machine learning easy! Reminds me of how Pneumatic Workflow …
ytc_UgwQwU1hw…
AI will only replace the NPCs… if you ask me, we will see no difference. 95% of …
ytc_UgxSfUiXr…
They cannot - they opened Pandora's box and now the only thing they can do is wa…
ytc_Ugx5tNEui…
Is AI another thing that will be destructive due to underlying patriarchal syste…
ytc_UgwwUpbNh…
As a disabled artist I find it really offensive when ppl say "ai art is helpful …
ytc_UgxI3GgI-…
I will never relinquish the the steering wheel nor will I be driven by a driverl…
ytc_Ugw6TrgIk…
@TheDirtyBirchTrails The thing is - human is responsible. AI can kill you (in mo…
ytr_UgwpnoCLA…
Comment
Just so we're clear, it's impossible to program a consciousness into a machine because programmers can't define or understand consciousness. A.I. is still exactly like a calculator: it does exactly what SOMEONE programs it to. If an A.I. algorithm acts like a super weapon, it's because someone put those commands in there. This video is exaggerating a lot of realities about this and I'm tired, as someone that works in I.T. and dabbled in programming, that the average person lets sci-fi just....break through their reasoning and logic. The terminator movies and the like are literally just sci-fi. You have far more to fear with aliens and the CIA than A.I. super computers.
youtube
AI Governance
2024-05-12T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwraGR58oUPWdZDNFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV_EbW86XXDOqQOTN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMEkRp6fLgHLaSdlZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwwZ15fFu3HEad2c114AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy0M1bf4p4RoTIlI1F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUYG8KfcFhWAmzJdt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwU4DhLGb4bgPVtCMN4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgySUqad2g6xje5wcNR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzY_gf-A_mj0DZ-st14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8zDPauRMNKOqWk7Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}
]