Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@Doug-Strong I was sort of thinking the same. Like, with the whole sort of epiphany/clarification when ChatGPT first rose to popularity that it is a complicated text predictor: as far as I've seen in the years since, it hasn't actually stopped being a text prediction model. And none of the other popular ones now are anything other than that either.
It does the same thing it did when it came out, percentage yield of hallucinations seeming about the same too, and that being meaningless anyway because it's a text predictor, so it has no way of evaluating the truth or meaning of the text it generates.
I do assume though that there must be something of substance to this big concern, hype and provocateurism aside. There's an edge to the tones of voice some of the seemingly more expert and less egotistical pundits in this video speak, that speaks of a sincerity about what they are saying. But....idk. It all just seems like bollocks still, in my gut. Am I just being narrow by believing my gut more than these experts I wonder?
| Field | Value |
|---|---|
| Platform | youtube |
| Source | AI Harm Incident |
| Posted | 2025-09-11T21:3… |
| Likes | 10 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyrKajmk8Cs1ucBUuR4AaABAg.AMvJm77MXa0AMx2lbR4RRP","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxUCC5uuwDJ6L7_gul4AaABAg.AMv0snWxcqFAMvlgusGHLB","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxUCC5uuwDJ6L7_gul4AaABAg.AMv0snWxcqFAMwA1nCPlab","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwV5VNsp7Qyg1cTkiZ4AaABAg.AMuVfDtlMx3AMxFtMl48Oy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzBkPsOYILTpLLM7Xd4AaABAg.AMuOqRIpSnlAMySkUXfdhw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzBkPsOYILTpLLM7Xd4AaABAg.AMuOqRIpSnlAMye1reINsv","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgyLAad1baZrywknuQp4AaABAg.AMuG8qUAmn3AMvgHNYhMKU","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyLAad1baZrywknuQp4AaABAg.AMuG8qUAmn3AMw_d5qiiv9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyLAad1baZrywknuQp4AaABAg.AMuG8qUAmn3AMxBCFwa0aW","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugwyl1BD0Xwf4dM2hn94AaABAg.AMu9iCIW1DUAMuAsu-wdNk","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
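Before trusting a raw response like the one above, it is worth checking that every row parses and carries valid values for the four coding dimensions (responsibility, reasoning, policy, emotion). The sketch below does that in Python; note that the allowed value sets are inferred only from the rows visible here, so the real codebook may permit additional values, and the id-prefix check (`ytc_`/`ytr_`/`rdc_`) is likewise an assumption based on the identifiers shown.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the rows
# shown in this response; the full codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed"},
}

# ASSUMPTION: comment IDs start with a platform prefix like ytc_/ytr_/rdc_.
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")


def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the codebook."""
    rows = json.loads(raw)
    for row in rows:
        if not str(row.get("id", "")).startswith(ID_PREFIXES):
            raise ValueError(f"unexpected id prefix: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows


raw = ('[{"id":"ytr_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
rows = validate_rows(raw)
print(len(rows))  # 1
```

A failed check raises `ValueError` naming the offending row and dimension, which makes it easy to flag individual rows for re-coding instead of discarding the whole batch.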