Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

rdc_ofjk22c: Mine is exactly the same, they're literally begging for us to use it. Sometimes…
ytc_UgyRVGdr_…: I told my chatgpt to warn me when it wasn’t telling me checked facts or when it …
ytc_UgzE5SdZG…: I mean I understand the legal loophole they're using. Honestly most of this stuf…
ytc_UgxbV4gA7…: Here's a thought for you; The singularity has already happened (cica 2016). Of c…
ytc_UgxmOiYdv…: All this does is prove that most artists aren’t as good as AI. The AI one looked…
ytc_UgwPpIXtK…: To my older (56 year old) eyes the ai versions almost look like when I watch old…
ytc_UgwYW-FcV…: As a senior developer with 35 years experience: AI does help me in day to day co…
ytc_UgxsadoKo…: I love AI. It's the best for the future. I hope AI will one day rule the world.…
Comment
@khzzzzzzzz But a good discussion is able to answer questions. I don't think the average person is dumb. Most people can understand anything, as long as the discussion is honest. Like, we could discuss quantum computing, and I'm not going to say, "it's too complicated to explain to you." I think most of the people who have been interviewed and asked about the dangers of AI simply haven't thought about the dangers, and therefore they have no idea what to say. I have thought about the dangers and could write an essay about them. Dangers beyond what Musk said today. My point about social media is that we have a form of optimization currently which is possibly worse than AI. It's not that I don't understand AI, it's that I have a scientific view of things and that means I require proof. I'd need to see some evidence that AI could create a worse social discourse than what we have today. What we have today optimizes towards lies and hate. An AI might optimize at least some part towards hope. Many of the atrocities of history are born from hope. All I'm saying is, I'd need to see that AI would have a worse outcome than what we are currently seeing, which is the worst case, empirically, we've ever seen.
youtube
AI Governance
2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9oeTCw51Rlh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9ofK775EES1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzMj9VisD0PQ_0yNz54AaABAg.9oct-9HMRbc9ocu4-dGF5n","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyF680OgpbmZyJcr354AaABAg.9ocsOv3V23D9od84cRtajZ","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzkwNGcmwP9qpL-j0x4AaABAg.9ocsCbqjuY29od2qzYWTDP","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"none"},
{"id":"ytr_Ugz08rB3NMf6RMDfgt94AaABAg.9ocrUQQGYNN9od-Ctj_6g-","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz08rB3NMf6RMDfgt94AaABAg.9ocrUQQGYNN9odPFZYfyEM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocsJfzYXqn","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocw3mp-lPK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocxRhRj13o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
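A raw response like the one above is a JSON array of per-comment codes, one object per comment ID. Before storing the codes, the response can be parsed and sanity-checked against the codebook. A minimal sketch in Python — the field names come from the output above, but the allowed-value sets are assumptions inferred from the labels observed here; the real codebook may define more:

```python
import json

# A raw LLM response in the same shape as the one above
# (hypothetical IDs, truncated to two entries for brevity).
raw = """
[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_example2", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

# Allowed values per dimension, inferred from the sample output above;
# this is an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "user", "unclear"},
    "reasoning": {"none", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"none", "approval", "fear", "indifference", "resignation", "unclear"},
}


def validate(entries):
    """Split entries into (valid, errors).

    valid  -- entries whose value for every dimension is in ALLOWED
    errors -- (id, [bad dimensions]) pairs for entries that fail
    """
    valid, errors = [], []
    for entry in entries:
        bad = [dim for dim, ok in ALLOWED.items() if entry.get(dim) not in ok]
        if bad:
            errors.append((entry.get("id"), bad))
        else:
            valid.append(entry)
    return valid, errors


entries = json.loads(raw)
valid, errors = validate(entries)
```

In practice the parse step would also need to handle malformed model output (e.g. catch `json.JSONDecodeError` and re-prompt), since nothing guarantees the model returns valid JSON.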