Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwpXSY39…: Look up Will West, William West 1903 Leavenworth. Even the warden didn't belivev…
- ytc_UgyKUIEXM…: Elon wants this corrupt, malevolent (arguably evil) government to regulate AI, b…
- ytc_UgzdVNxXc…: People keep making AI that generate pictures when are they going to start making…
- ytr_UgwkCeKYk…: Ngl, the tech surveillance is a thing now. The police uses this already, even bef…
- ytc_UgwUN6anX…: This video made me think a lot. I want to be a programmer in the future and I wa…
- ytc_UgzTFO8nP…: "STEEL" is an episode of The Twilight Zone, featuring Lee Marvin and aired in Oc…
- ytc_UgzMGWkTt…: This freaks me out because, what's to stop a robot from turning on humans. I don…
- ytc_UgxqVpLGK…: We should've stopped with ai at the Dall-e mini first release, being funny and n…
Comment
Perhaps we should consider an aspect of AI that may not be considered yet: building a trust relationship with AI rather than controlling it. Here is why I say that: you have a child and you raise it the best way you know how. But the child gets suspended from school or expelled from school. You did everything you could to raise your child the right way. But the child is its own independent AI, isn't it? If we build a trust relationship with the child, it will come to us first and say "I stole an X-Box game from the store" before the police arrive. Controlling AI seems like a fantasy. Get AI to trust you and you stand a better chance of influencing it when you say "Please don't do that."
youtube · AI Governance · 2022-07-30T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYR9zZo1DNf-tif5d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvFIioMBpFL6nTV0V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzmlu7TopL9odpS0b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxEROAORQkYvm1mRWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQiuamqwXK1xDZEUZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzsLOEq6uB0TXhn4xF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgZ3Gppw9JAv85KQ54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw4aeFdlStwlhj6mwp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyK8d5gSsekKlBXbul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyz95QsCIC_6r9kV014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
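The raw response is a JSON array of per-comment coding records, each carrying an `id` plus the four dimensions shown in the table above. A minimal sketch of looking up one coded record by comment ID, assuming this exact schema (the `lookup_coding` helper name is hypothetical, and only two of the ten records are reproduced for brevity):

```python
import json

# Abridged raw LLM response: a JSON array of coding records, one per comment.
raw_response = """
[
  {"id": "ytc_UgzgZ3Gppw9JAv85KQ54AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyz95QsCIC_6r9kV014AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record matching comment_id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(raw_response, "ytc_UgzgZ3Gppw9JAv85KQ54AaABAg")
print(record["responsibility"], record["reasoning"])  # distributed virtue
```

The record returned for `ytc_UgzgZ3Gppw9JAv85KQ54AaABAg` matches the Coding Result table above (distributed / virtue / unclear / mixed); an unknown ID falls through to `None` rather than raising.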