Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- rdc_fwhrbqe: "Yes but it's pretty much the same if you would move to the EU from the US. Gain…"
- ytc_UgwIw2Kmn…: "I worked on AI in 2014, from experience nobody cares and the smartest people are…"
- rdc_n9rdunz: "I'm conflicted on AI making junior engineer jobs easier. Have you ever seen some…"
- ytc_UgwHQJWPw…: "I actually dont even now how to generate a picture with Ai- I think im dumb but…"
- ytc_UgxruyVjT…: "Such nonsense. AI is a flawed overhyped technology with all kinds of acumen prob…"
- ytc_Ugz0eD5__…: "SICK & TIRED of AI "telemarketers" constantly calling us ! IF we answer the phon…"
- ytr_UgxZYT29P…: "this is not a people problem. this is a post-COVID issue with the new generation…"
- rdc_k8wjueh: "But then missiles are launched via automation, when it could be an error. This h…"
Comment
Elon waxing poetic about the "horrors of AI" simply reveals how he internalized more of the sci-fi he read as a kid and less of the truth about how LLMs (Large Language Models like GPT-3/4, etc.) actually work. The "Terminator-style" eradication of humanity like a boot on an anthill implies that a LLM like GPT-4 is a sentient thinking machine. LLMs are not sentient. LLMs cannot become sentient by "training themselves". LLMs are ultimately a very impressive sock puppet which mirrors our knowledge back at us.
There is a real danger, however, but it is one of job displacement, skill/knowledge rot and misinformation propagation. The fact that Elon completely skirts these areas again shows his profound misunderstanding of what LLMs really are.
youtube · AI Governance · 2023-04-18T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw_tnvr4r01lSWTGBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwOVn3DsfNDFNp92sV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHJFaE3Qvb3xYx2Ix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-9zVCioZhY9co83V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyPpwwAw1rwUlbXB_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJtCKTvApPfOATIsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx7IktPBjjloTZ627l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzpdL94KlDqyKMHAlp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXDKU6GxjSIZYqLYB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzF2eVgG9sI3NKKmJh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
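The raw response above is a JSON array of per-comment codings along the four dimensions shown in the Coding Result table. As a minimal sketch (not the tool's actual code), such a response can be parsed into a lookup-by-ID table and each field validated against the label set visible in this output; `parse_codings`, `RAW` (abbreviated to two records here), and the `ALLOWED` sets are illustrative assumptions, and the real codebook may contain more values:

```python
import json

# Abbreviated copy of the raw LLM response shown above (first two records).
RAW = '''[
 {"id":"ytc_Ugw_tnvr4r01lSWTGBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwOVn3DsfNDFNp92sV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]'''

# Label sets observed in this response; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def parse_codings(raw):
    """Return {comment_id: coding}, raising on any out-of-codebook value."""
    out = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        out[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return out

codings = parse_codings(RAW)
print(codings["ytc_UgwOVn3DsfNDFNp92sV4AaABAg"]["responsibility"])  # developer
```

Keying the records by comment ID is what makes the "inspect the exact model output for any coded comment" lookup above an O(1) dictionary access rather than a scan over the array.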