Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- `ytc_Ugz1-OpVw…`: "The only way AI will make things better is by taking power away from narcissisti…"
- `ytc_Ugy1JyTLS…`: "New YA dystopian series: 2 dozen artificially immortal tech gajillionaires ownin…"
- `ytr_UgwtnTu93…`: "Great reference! \"Ex Machina\" definitely raises intriguing questions about AI an…"
- `ytc_Ugx1dJZRl…`: "AI is absolutely awful at coding. If you ask it to do anything even semi-complex…"
- `ytr_UgypaiNF7…`: "When we say superintelligence, we are already talking about an intelligence that…"
- `ytc_Ugwg3g6tY…`: "No, that's not the reason! The *real* reason is that AI is programmed in INTERC…"
- `ytr_Ugx8tEK_M…`: "@user-mp3pf2ir2phow is that gate keeping? It’s common sense. How long has cellph…"
- `rdc_nta51xo`: "They are going to start the war with AI because AI turns out not to be a sycopha…"
Comment
All she is saying is that AI can be dangerous but there is no good reason (yet) to believe it is dangerous in the rather sensationalist way that Tegmark and a number of others seem to believe. It's really difficult to argue with True Believers who keep repeating their mantras over and over but I think Mitchell did as good a job as anyone could.
Source: youtube · AI Governance · 2023-07-17T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugx9bhtLneJ2aN4J9xl4AaABAg.9rpjteLMIMZ9t1-RcsIlgQ", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx5LT0M-B6vvyirP9Z4AaABAg.9rohS4EIjnX9s0zPcrwSe2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s-mgyxbLXl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s2cWYjwYHB", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxUDcaHQU7hVLpShSp4AaABAg.9rdSkgnSv1F9s3w0HmoPgU", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sFJ0z6J2yF", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
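Looking up a coding by comment ID amounts to parsing the raw response and indexing it. A minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the raw LLM response above, but the `index_codings` helper and the two-item `raw_response` sample are illustrative, not part of the actual tool.

```python
import json

# Illustrative two-item excerpt shaped like the raw LLM response above.
raw_response = '''[
  {"id": "ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw model response and index the coding objects by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K"]
print(coding["emotion"])  # -> indifference
```

Indexing once and reusing the dict keeps each subsequent ID lookup O(1), which matters when the same response is inspected for many comments.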