Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- ytc_UgzJRXVV3…: "hey guys lets blame ai instead of a moron making dumb decisions because that mak…"
- ytc_UgwRCR2b0…: "Can subjective reality is only created by humans if ChatGpt can create it ??? Or…"
- ytc_UgyuRCty6…: "Then ask the chat bot what it means to be alive, 😂 my gameboy is alive too and n…"
- ytr_Ugw2YormF…: "Thank you for sharing your thoughts on the honesty of robots. In our live broadc…"
- ytc_Ugz6Cwi0d…: "When they are released with the latest AI, they will also say : Sorry I only dat…"
- ytc_Ugz0LPFwC…: "Why the surprise at the 'driver' being on the phone? Of course she was on the p…"
- ytc_UgyG44Zt4…: "If you ask ChatGPT a question, it will not answer the question because it does n…"
- ytc_UgzE2l2vd…: "If we don't understand something, and it makes decisions for us or tells us what…"
Comment

> In my opinion, LLM-based AIs can't reach AGI. They will always lack their own creativity. Actually, I think they'll plateau very soon, if they already haven't and the progress only comes from better integration. But their capability to learn quickly and store knowledge immediately is already very powerful and definitely can cause an immense damage if someone releases it (intentionally or unintentionally) to some critical system. Because AI is sucking in even the malicious ideas humans are feeding it with and without the morale...

youtube · AI Governance · 2026-03-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyp0I2usYT0GC7x5xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyA7v3P_2lJEPWw1F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwDEgCu0GlOmHrfs2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBV7ze01zSJzgWsb54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyxysSxSJVJGKhYbWp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws0vRXy_AXojH1WTF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyKi9_vtgIPKfpzgZx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxlw2FFBiUu0ygxYcB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxp_unGNnky7Th3GlJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugylbebws2UGC-q0K014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
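A batch response like the one above can be checked before its codes are stored. The following is a minimal sketch, not the tool's actual pipeline: the allowed values per dimension are inferred only from the examples shown on this page (the real codebook may define more categories), and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the examples on this
# page; the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"distributed", "user", "company", "government",
                       "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and check each record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError("unexpected comment id: %r" % rec.get("id"))
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r"
                                 % (rec["id"], dim, rec.get(dim)))
    return records

# Validate one record taken from the batch above.
raw = ('[{"id":"ytc_Ugws0vRXy_AXojH1WTF4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
records = validate_batch(raw)
print(len(records))  # 1
```

Rejecting a whole batch on the first bad value is a deliberately strict choice; a production coder might instead mark invalid records as `unclear` and re-prompt only those.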