# Raw LLM Responses
Inspect the exact model output for any coded comment; records can be looked up by comment ID.
Random samples (click to inspect):

- @PentaFanAccount did you listen to his point on the difference between ai traini… (`ytr_UgzBFTrfY…`)
- Without tax income, how will governments afford UBI? I guess they could print mo… (`ytc_Ugw0t9iCt…`)
- @MrGrantGregory I wasn't asking a question. It just reminded me of the movie RE… (`ytr_Ugwg0RN1_…`)
- Elementary schoolers K-5 mostly, Middle schoolers 6-8 mostly… that is deathly si… (`ytc_UgxOGV-6w…`)
- Ai community is cringe asf letting a computer do all the work for u bahahaha… (`ytr_UgzDezIPp…`)
- For fun let's pretend that every paying job can be automated. Would economics n… (`ytc_Ugw0ZRXZG…`)
- I’m a psych and my best friend is a counsellor. When she herself needed services… (`ytc_UgyFmkkQ_…`)
- I think the major difference between how visual artists and musicians are treate… (`ytc_Ugx9kXMIl…`)
## Comment
Geoffrey Hinton, the godfather of AI, constantly warns people of the risks of AI. Right now companies do what they want with no concerns about security. AI can be a good tool if used right, but there is currently no oversight and the risks to humanity are enormous. We are very, very close to AGI and superintelligence, and depending on how all this is done, it is uncertain what could happen, whether this could be good or terrible for all of us. Obviously this is just an example of a NARROW AI chatbot that caused another human being to take his own life, and that is terrible. I really urge you all to see more of Geoffrey Hinton (among other experts who warn people about the dangers) to get a better understanding. If narrow AI can already do this, imagine what else it could do when it reaches the level of an AGI or superintelligence. This is not a game, and many technocrats are careless about all this. There need to be guardrails and oversight.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-12-10T14:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
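The coding table above is a single record from a batch response rendered as Markdown. As a minimal sketch of that rendering step, the helper below (`render_coding_table` is a hypothetical name; the field keys are assumed from the table and the raw JSON in this page) turns one coded record into the same two-column table:

```python
# Hypothetical helper: render one coded record as the Markdown table shown above.
def render_coding_table(record: dict) -> str:
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
        "coded_at": "Coded at",
    }
    rows = ["| Dimension | Value |", "|---|---|"]
    for key, label in labels.items():
        rows.append(f"| {label} | {record[key]} |")
    return "\n".join(rows)

# Values taken from the coding result above.
record = {
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
    "coded_at": "2026-04-27T06:24:59.937377",
}
print(render_coding_table(record))
```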
## Raw LLM Response
[
{"id":"ytc_Ugyq6ZMXYak_2jn2TpJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyLHW1jVOWsQb-5wV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy8ENEGOxt9tueCalN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz-NgHwVW7zZXqJ3XZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwqusR2DJ5rkkoYLEh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwT4qkXEhOK1xa67ph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzqN-6RJQwnh5HEpmR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwnXmyml2VVnF74pfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwfGRGUaBuphFAZC3R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxvVOr6olGpjPdKPvp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
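The raw response is a JSON array of per-comment codings. A minimal sketch of consuming it, assuming this schema: parse the array, validate each record against the category sets, and index by comment ID for lookup. The category sets below are inferred only from the values visible on this page, not from a complete codebook, and `parse_codings` is a hypothetical helper:

```python
import json

# Two records copied verbatim from the raw response above.
RAW = """[
{"id":"ytc_UgwT4qkXEhOK1xa67ph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzqN-6RJQwnh5HEpmR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Category sets inferred from this sample only; a real codebook may allow more values.
DIMENSIONS = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse model output and index records by comment ID, validating each dimension."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(RAW)
print(codings["ytc_UgwT4qkXEhOK1xa67ph4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each inspection is a single dictionary access rather than a scan of the array.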