Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
How is the AI going to go about maintaining the machines it needs to survive? Wo…
ytc_UgzgeSdXH…
Odd how they name Musk but none of the other billionaires with their own AI prog…
ytc_Ugx60amac…
Some one asked a.i ::::do you reply in kind or cash??? A.i. replied...I reply…
ytr_Ugy385F9A…
AI takes in data, all data. Including Movies, shows, and stories written by hu…
ytc_Ugyh4QIdY…
You dont need to be a genius robot to know that humanity doesnt have much of a f…
ytc_UgwqT7Suz…
No point in adapting. There is no merit for the artist to use AI. AI art can't s…
ytr_UgwVvVwY0…
I'm 25 and I've been drawing for possibly 20 years. I've worked hard on my craft…
ytc_Ugw_2VNcE…
Humans: Make stories warning about AI taking over and wiping out humanity.
Also…
ytc_Ugw4HsGGV…
Comment
Wow. What a contrast to the recent BBC interview. Elon is very relaxed and communicative here. The sincerely curious interviewer Tucker ,permitted Elon to finish a complete sentence as well as many interesting thoughts about AI.
Is it possible to program AI to have an ethical framework and help optimize the chance of survival for humanity?🤔
In the meantime, regulating the heck out of this technology is a great idea 😳
youtube
AI Governance
2025-06-02T02:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyuL95WTcIvG7DRK5x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUazWJt2xvZKhicNl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTBmvxR-L2TN-blfJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyh1pD1wa_4Kd9UBYl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyv06yYn-guJ94gsup4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx-Ssfww2gieyv793B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6irlMhlD_B0QriCF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwNnDK-yFvcOWGblL94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxkZNWCMlSCPc4DKWJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxmsvhL1Eu8H08gx0l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
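A raw response like the one above can be checked before the codes are stored. The sketch below is a minimal validator, assuming the category vocabularies inferred from the records shown here; the actual codebook may define additional values, and the `SCHEMA` dict and `validate_response` helper are hypothetical names, not part of the tool.

```python
import json

# Category vocabularies inferred from the sample responses above
# (hypothetical reconstruction; the real codebook may allow more values).
SCHEMA = {
    "responsibility": {"government", "company", "developer", "user",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the samples use a ytc_ (comment) or ytr_ (reply) prefix.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_Ugyv06yYn-guJ94gsup4AaABAg",'
       '"responsibility":"unclear","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
print(validate_response(raw)[0]["policy"])  # → regulate
```

Failing loudly on an unknown code keeps a drifting model from silently introducing categories the downstream analysis does not expect.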