Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Again you enveloping tech and finding blame in human error, its up to the person…
ytc_Ugz63f-TR…
I can't wait for all the Chinese governors to make the specific logic rules that…
ytc_UgzNBL5lv…
Some country will ban a lot of automation and AI. People will go there for welln…
ytc_UgzDDG6Pw…
they have to have the terminator scenario on the table for the massive over-capi…
ytc_Ugw45GTud…
0:38 *I'm sure they must have done this take several times.* You can see…
ytc_UgyLnSF2A…
I think they are doing to purpose to make it harder to distinguish real content …
ytc_Ugxo8RSg6…
You can use AI to make art. If you make the AI yourself and train it with only y…
ytc_UgxKpoCh8…
It IS worth noting that, once automation of work makes it “across the line” and …
ytc_UgwDZF-QR…
Comment
Why on earth didn’t you immediately step in when that guy said, “Elon has no moral compass”?
I’ve really enjoyed watching your shows until now, but something feels off here.
Elon is deeply concerned about AI—he's one of the few voices consistently warning about its dangers and pushing for safety standards. He’s also building the only serious defense we might have one day, should we need it.
Clearly, that guy has no idea what he’s talking about. Anyone genuinely interested in AI knows what the key players say and think. And Elon is the key player—by far the most thoughtful and safety-conscious of them all.
That’s one of the main reasons why Grok is designed to be maximally truth-seeking.
Source: youtube · Topic: AI Governance · Posted: 2025-06-16T08:1… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
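A coding like the one above can be sanity-checked against the label sets that appear in this section's output. This is a minimal sketch; the value sets below are inferred only from the codings shown here, and the full codebook may allow additional labels.

```python
# Allowed values per dimension, inferred from the codings shown in this
# section (not necessarily the complete codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user",
                       "developer", "company"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def validate(coding: dict) -> list:
    """Return the dimension names whose value is missing or unrecognized."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding from the table above passes validation.
coding = {"responsibility": "none", "reasoning": "virtue",
          "policy": "none", "emotion": "outrage"}
print(validate(coding))  # []
```

Flagging out-of-vocabulary values early is useful because LLM output occasionally drifts from the requested label set.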
Raw LLM Response
```json
[
{"id":"ytc_UgxLku76oIu_RFml3BF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgysstcLkkygCYKoWAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE4n2PB7mDqwnJU7N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxAQfnpSA1THoqb8jl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2jwHHq7739qwC36t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_kvK5TzhlgtCwzal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-tVMUNCa_4XC_u0Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOvcWxk-W99gcY0CN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxm9d2kE0yC8yOcDdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw63_Sw6ADWN6-DAIZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
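Looking up a coding by comment ID amounts to parsing the raw response array and indexing it. A minimal sketch, using two codings copied from the response above (field names taken from that output):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Two entries are copied verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_UgzE4n2PB7mDqwnJU7N4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxAQfnpSA1THoqb8jl4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

codings = json.loads(raw_response)

# Index by comment ID for constant-time lookup.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgzE4n2PB7mDqwnJU7N4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # virtue outrage
```

Indexing once into a dict keyed by `id` is simpler than rescanning the array per lookup, which matters when many comments are inspected interactively.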