Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below:
- "1: Enslave an artist 2: Make him finish all your drawings or draw from descripti…" (`ytc_UgwFEBwMR…`)
- "If AI could get tricked into saying secrets to a researcher and cant realise tha…" (`ytc_UgyUQnzdD…`)
- "Will I actually be able to speak to AI when I call government departments, I mig…" (`ytc_UgxrY0Bbz…`)
- "Agreed. The biggest thing I don't understand is developers will ask to refactor…" (`ytr_UgxkublFe…`)
- "But if they understand that there just as easy a target if not more so as the R&…" (`ytr_Ugw5Gj2u4…`)
- "Disabled artist here, AI art isn’t art, it’s a lazy collage. The prompt is more …" (`ytc_UgxTIGGfM…`)
- "AI will make engineers obsolete. In a sense it will indeed level the playing fie…" (`ytr_Ugy2OvNLm…`)
- "I'm using AI to make my game I've written for over a decade....not because I don…" (`ytc_UgwRxJZ9l…`)
Comment
It's stupid to calculate the risk of humanity disappearing because of AIs - there are an infinite number of unknown parameters! No wonder the results are incoherent. Nobody seems to realise that an AI depends on humans to function, to repair itself and even... to have a purpose: an AI that is no longer interrogated is like dead! It has no will of its own, no drive... If it seeks to protect itself, it's because it's trained with human data, and for humans disappearance is a tragedy: so it's trained to avoid it. As humans also have the right to self-defence, it is quite natural for them to react excessively in the way described.
NB: a simple jet of water is enough to destroy any AI: short-circuit.
youtube · AI Harm Incident · 2025-07-27T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzS6yyzf9ot-TShh3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxGaUinz9BuXgmkKBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyj0PXz-yC8Qsl9z8F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxiR2LzO81zfL_ejoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFHq4oPPPMv-9U6ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxW_XL6AbBSrn6ba4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8Jm3onh9pmtVoDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyS2Fu3v979r1xNaaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxvxZCMZXe0be2BNL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0QlB_VzRolPEoM_F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
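As a rough illustration of how a raw response like this can be matched back to a coded comment, the sketch below parses the JSON array and looks up one entry by its ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the helper function name is illustrative, not part of the tool, and the embedded example is just the last entry of the array, whose values happen to match the Coding Result table.

```python
import json

# Minimal sketch, assuming the model returns a JSON array of per-comment codings
# in the shape shown above. Only the last entry is reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_Ugy0QlB_VzRolPEoM_F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def coding_for(comment_id: str, raw: str) -> dict | None:
    """Return the coding dict for one comment ID, or None if the model skipped it."""
    entries = json.loads(raw)
    return next((e for e in entries if e.get("id") == comment_id), None)

coding = coding_for("ytc_Ugy0QlB_VzRolPEoM_F4AaABAg", raw_response)
if coding:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```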