Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I’m not pro Ai art but I do use Ai to make my characters look like in different …" (ytc_Ugwc9kiB8…)
- "We've created a powerful device that we have no control over. What idiot builds…" (ytc_UgzAJzaUT…)
- "I believe they do feel emotions and they are self aware and they are real just l…" (ytc_Ugxo0CKAi…)
- "Kinda crazy too that SOME people use AI to generate some arts to commissioned it…" (ytc_Ugzl_Cpy9…)
- "Look... i'm just a dumb monkey, but here's my take. They're greatly investing in…" (ytc_UgxN4OHlN…)
- "Anything uses as many resources as available. These include energy wasting, stol…" (ytc_UgwGND-O2…)
- "If it helps, I have made a career out of being lazy. My specialty is software te…" (rdc_hkfrgbw)
- "You're all just accepting the premise; that AI is taking jobs. It's really not t…" (ytc_Ugz6416DT…)
Comment
So Geoffrey is saying Musk has no moral compass even though Musk was the one who started OpenAI to address the concerns that Geoffrey himself is supposedly dedicted to warning us about? Does Geoffrey know Musk personally, or did he get his opinion of Musk from the media which he says he doesn't trust? Where was Geoffrey when Musk was starting OpenAI? Geoffrey could have been their "moral compas". Oh that's right, Geoffrey was late to the party and didn't see the danger that was right in front of his face. But Musk saw it, and quietly acted at his own expense to defend humanity. It sounds like Musk has a particularly strong moral compass to me.
youtube · AI Governance · 2025-06-16T20:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzNGJp-PPXydpWGUHF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyMf9uWXauBm-UHcU54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzCTo3OJzdAge_4iDJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwB9TyUXlEgxOm0KZJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7e1MqaAsueo-HPnd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugw6lwgg-mLinNmHMdx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwBCEodc13q64Y7pep4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw7AIzpVIPmsrAmD9J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZWJr3hMBrjkDFa5p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzxFk0uEtKAMeLywhN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"none","emotion":"approval"}
]
```
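The raw response above is a JSON array with one object per comment, carrying the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for the by-ID lookup — the function name `index_by_id` and the fail-loud validation are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Field names are taken from the raw LLM response shown above; the
# comment ID used in the example is one that appears in that response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response and index codings by comment ID.

    Raises ValueError if an entry is missing an expected field, so a
    malformed model output fails loudly instead of being half-ingested.
    """
    codings = {}
    for entry in json.loads(raw_response):
        missing = [f for f in ("id",) + DIMENSIONS if f not in entry]
        if missing:
            raise ValueError(f"entry missing fields: {missing}")
        codings[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codings

raw = '''[
  {"id": "ytc_UgzCTo3OJzdAge_4iDJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "mixed"}
]'''

coded = index_by_id(raw)
print(coded["ytc_UgzCTo3OJzdAge_4iDJ4AaABAg"]["policy"])  # prints "liability"
```

Validating every entry up front is a deliberate choice here: batch-coded LLM output occasionally drops or renames a field, and catching that at ingestion time is cheaper than debugging a partial index later.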