Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_nu77wjw`: I agree; the quick fix I've noticed in an attempt to lessen the mess you've desc…
- `ytc_Ugx4a-LbU…`: Dr. Saidy made an interesting case of potentially using AI for the future of hea…
- `ytc_Ugy9dJJp_…`: Everything they know, they've learnt from us. AI bad behavior is simply a reflec…
- `ytc_UgjP07HL5…`: Domestic animal's rigths were introdiced in the Europe AFTER all of them not hav…
- `ytr_UgyOBiF0o…`: @VultureSkins also, if you assume generative AI generates every word or phrase i…
- `ytc_UgjSjaD1a…`: To say the car has to make an ethical choice and must hit another vehicle is a f…
- `ytc_Ugw21x_mR…`: I honestly don’t see how you stop the deepfake thing. The tech is just going to …
- `ytc_Ugy22PUy4…`: The problem with AI is it uses a Top-Down approach skipping over what’s seen as …
Comment
I was just experimenting with generative AI for images and music before watching this, and now I can’t stop thinking about what happens once we move from creative models to truly superhuman reasoning models or systems. The idea of autonomous AI interfering with things like global financial infrastructure, etc feels both fascinating and genuinely unsettling—threatening even. It’s fun to play with these tools, but there’s a real edge to it too. At what point will corporations actually take the experts seriously about the risks we’re heading toward, now that we're seeing a tad of AI's sinister potential?
Source: youtube · Topic: AI Governance · Posted: 2026-03-17T04:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxFBh7sICuefzof2kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwxTBPE9D4iYHNQevF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzGmtADaJEY6k8qmd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOFfzGwQ6MfDj6MdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFprXAsM6XYxs3UQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx9UBQQH62TXoM3hoV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuvbCVMFKVZiHmmA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbFeUalc0LjZuDAhF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNWIG2RB08sYWiEpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyIP835IEjrJ4_2SXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
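Each record in the raw response codes one comment along the four dimensions shown in the result table. A minimal Python sketch of how such a response might be parsed and validated before use; the codebook values below are inferred only from this sample and are hypothetical, not the tool's actual vocabulary:

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (hypothetical -- the real codebook likely defines more codes).
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *CODEBOOK} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec[dim]!r}")
    return records

# First record from the sample response above
raw = ('[{"id":"ytc_UgxFBh7sICuefzof2kN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a fixed codebook at parse time catches the most common failure mode of LLM coders: an off-vocabulary label that would silently skew downstream tallies.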