Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Theyre going after ai companies, but theyre not going after studios disney for b…" — ytc_UgxU3zGrI…
- "I agree with everything you said, but I'm curious where you draw the line. Just …" — ytc_UgwdBmpjE…
- "In order to make a Strong AI we first need to fully understand how our brains wo…" — ytc_UghNyj_gK…
- "I don't believe the singularity will ever happen tbh. As impressive as ChatGPT i…" — ytc_UgyPfmQAx…
- "in my view automation is good, the only problem is that it happens gradually and…" — ytc_UgyQMxMVh…
- "The thing that's different about AI today is its ability to adapt and evolve at …" — ytc_UgzjGsVEA…
- "I don't get his point. AI doesn't mean people won't go to university, or study …" — ytc_UgwCcmfF-…
- "15:26 again, it's free. I trained AI myself on my PC for just 2$ of electricity …" — ytc_UgwcBoz6x…
Comment
Everyone shouldn't have AI, because everyone isn't good. The bad/evil/hateful people of the world will use AI to do evil things with it. Who decides who gets AI? That's the million dollar question. Should there be an AI license, like a driver's license? You can only use AI if you can prove yourself to be non-evil. Who should determine your evilness? The AI who can come up with a test to determine people's true intentions.
youtube · AI Moral Status · 2025-08-23T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy22qVjVr__psfp9-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwK97jAhP1xAcjzy54AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9yCT6gPq2PCalRV54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxunjjKGXcdXTLBrJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwzUKV7to5kzkDx2994AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw73QK-tCPpxksH_QJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVxEutsV-otp_rdRt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhnI40HBp8nUJzhjx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxKhWlKpUGyVHP8js14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyM5vqkDrbeBn6n5md4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
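The lookup above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codes, so resolving a comment ID means parsing the array and matching on the `id` field. This is a minimal sketch using the field names shown in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the truncated raw response here uses one row from the batch as sample data, and `lookup_coding` is a hypothetical helper name, not part of the tool.

```python
import json

# Sample raw model output: one row of the batch shown above (fields per the tool's schema).
raw_response = """
[
  {"id": "ytc_UgzwK97jAhP1xAcjzy54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding row for one comment ID, or None."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgzwK97jAhP1xAcjzy54AaABAg")
print(coding["emotion"])  # fear
```

In practice the raw response may also arrive with extra text around the JSON array, in which case it would need to be extracted before `json.loads` will accept it.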