Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytr_UgxlYKASO…`: Agreed. Anytime these CEOs are interviewed saying “AI is dangerous” is actually …
- `ytc_UgxkW-pgC…`: So how many people do you think AI will allow to live and what will be the stand…
- `rdc_n9hkwuo`: Until the ideas turn out to be so unprofitable that no one wants to fund it anym…
- `ytc_Ugx9EP1Km…`: Just don’t give that shyt ai, let it be controlled like a Drone, much safer…
- `ytc_Ugw11_Lbq…`: A.i will buy all the stuff, here's how. The industries will recognize that probl…
- `ytc_Ugzb3ixO1…`: No, we know exactly how they work, they know exactly how they work, they built i…
- `ytc_UgxJjl18c…`: G.H. min. 7.20 "The BIG COMPANIES" are gonna be very very unhappy about tax…
- `ytc_Ugw8xoWtM…`: This isn’t AI revealing secrets. It’s a stage trick using one-word rules to make…
Comment

> My suggestion is for you all to actually interact with the available 'claimed' AI; none of them are actually 'aware intelligent', which is something that would be concerning. They are just programs, coded with premade/or randomized responses. I wish he'd stop making a big deal about it, it's just a publicity stunt for you to get curious about it and look into it yourselves, but 90% of you will just be parrots.
> Stop being a troll Elon.

Platform: youtube · Topic: AI Governance · Posted: 2023-04-18T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
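The four dimensions above take values from a closed codebook. As a minimal sketch, a record type can validate a coding against the value sets observed on this page (these sets are inferred from the visible samples and are an assumption; the real codebook may define more categories):

```python
from dataclasses import dataclass

# Category values observed in this dump; assumed, not the authoritative codebook.
OBSERVED_VALUES = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed"},
}

@dataclass(frozen=True)
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any value outside the observed category sets.
        for dim, allowed in OBSERVED_VALUES.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{dim}={value!r} not in observed codebook values")

# The coding shown in the table above passes validation.
coded = Coding(responsibility="none", reasoning="mixed", policy="none", emotion="indifference")
```

Validating at construction time catches malformed LLM output (e.g. a misspelled category) before it reaches the display layer.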
Raw LLM Response
```json
[
  {"id":"ytc_UgzOU59ZojLMQCquohd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQYgGEr9pkhrr-Q0d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6DD-gbXHEoqOIVYN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz3S-HxLjN9IKb1C5t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZ1PGrj9W-oNsQi9B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVSUeF7HVTbHBq3md4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyoY5r_-RzjL8V7SId4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycEPd4Dw9zSEwX_vl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzYLpkGfUqmeclIR6p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwQBEPt5cA6wgH61Zx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
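A raw response like the one above is a JSON array of per-comment codings, which is what an ID lookup reads from. A minimal parsing sketch, assuming exactly that shape (the helper name `index_codings` is hypothetical, and only the first two records from the response above are reproduced for brevity):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    into a mapping from comment ID to its coding dimensions."""
    records = json.loads(raw_response)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"} for rec in records}

# First two records from the raw response shown above.
raw = '''[
  {"id":"ytc_UgzOU59ZojLMQCquohd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQYgGEr9pkhrr-Q0d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

by_id = index_codings(raw)
print(by_id["ytc_UgxQYgGEr9pkhrr-Q0d4AaABAg"]["emotion"])  # prints "indifference"
```

Indexing once and looking up by key keeps the "look up by comment ID" path O(1) per query instead of rescanning the array.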