Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect
- AI just like Nukes might help humanity to cooperate. Otherwise, in the competiti… (`ytc_Ugwk-xArz…`)
- I made an AI! I let him fight against many bad AIs but he was self teaching ever… (`ytc_UgyXbNc2H…`)
- Nightmare Scenario: AI has face recognition software. Finds your complete socia… (`rdc_kars0gr`)
- Not the problem that people make it out to be. We still thankfully have 8 billio… (`rdc_k9hrrvq`)
- No way this comes the moment when I post a story about AI "artists" 😭… (`ytc_UgzYQDVtM…`)
- @neko7606 Robotics will be accelerated by AI and we will have humanoid robots ca… (`ytr_UgxoGu0gR…`)
- DARPA simulations found that autonomous bombers that are ordered to stop bombing… (`ytr_Ugza79EGR…`)
- AI spend needs to be taxable somehow, and we need a framework for universal basi… (`ytc_UgzW7-b8a…`)
Comment
@iamtorrego hallucinations will never be non existent because the ChatGPT has no concept of right and wrong. It literally doesn't know if the words it is saying are true or not. While better (and more limited) training data will definitely improve it. It still requires plenty of human supervision. Even in coding where the input data is much higher quality, and it is useful, you still actually disregard many answers until you recognise one as what you need. You can just get through them quite quickly. Asking ChatGPT to assist you with something you're not an expert in is a recipe for disaster as you simply won't know how to even determine if its wrong.
youtube
AI Harm Incident
2024-06-04T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49AgXCeGzy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49OsK9cns-","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49QTsdYj6s","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugyx60GwAmPsAACY1-R4AaABAg.A46SWXTD5TNA46VZJXgMe8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugyx60GwAmPsAACY1-R4AaABAg.A46SWXTD5TNA4CCCVWceV9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgypF5885oQoXN19aMh4AaABAg.A46RTL5NQHoA4A6uf9gzXo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgypF5885oQoXN19aMh4AaABAg.A46RTL5NQHoA4GbkL5BEcH","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxHlgO2QKiD6XKFUo94AaABAg.A46Q_PlFoHdA48TXK5ln-7","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwyTaQbGoahOm5Q5RV4AaABAg.A46NLLsRDv2A46Vm4E0_Vo","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytr_UgwyTaQbGoahOm5Q5RV4AaABAg.A46NLLsRDv2A47oTO9m6TX","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
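The coded row shown above ("Coding Result") is recovered from this batch response by matching on the `id` field. A minimal sketch of that lookup, assuming the response is a JSON array in the format shown (the IDs and values below are shortened, hypothetical examples, not real comment IDs):

```python
import json

# Hypothetical batch response in the same shape as the raw LLM response above:
# a JSON array with one coded object per comment ID.
raw_response = """
[
  {"id": "ytr_Ugz_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx_example2", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

coded = index_by_id(raw_response)
row = coded["ytr_Ugx_example2"]
print(row["responsibility"], row["emotion"])  # -> ai_itself fear
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when many comments are coded in a single batch.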