Raw LLM Responses
Inspect the exact model output for any coded comment: look up a response by comment ID, or browse the random samples below.
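The ID lookup can be sketched in Python: each raw LLM response is a JSON array of coding records carrying a comment `id` (as in the raw response shown further down), so a dictionary index gives constant-time lookup. The record contents here are illustrative placeholders, not real coded comments.

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Index a raw LLM response (a JSON array of coding records) by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two placeholder records in the same shape as the raw responses on this page.
raw = '''[
  {"id": "ytc_A", "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_B", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

index = index_by_comment_id(raw)
print(index["ytc_A"]["policy"])  # liability
```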
Random samples:
- "see the scary thing to me is, I am an A student in my CS degree classes. I am in…" (ytc_Ugy20NSU3…)
- "The discussion on mass joblessness raises a profound philosophical point beyond …" (ytc_Ugx00VnaD…)
- "Ai is not dangerous in the case where it's gonna gain emotion and kill us all bu…" (ytc_UgyN-t0E0…)
- "Whoever even thought of A.I. should support all of those that lost their jobs du…" (ytc_Ugx5sS6Xp…)
- "The last comment about how business leaders picking what's better for people as …" (ytc_Ugy8d0Js_…)
- "What a joke AI now is just LLM there is no intelligence in them now at all.…" (ytc_UgyQoqHtO…)
- "Ai is terrifying... there is literally an AI model that is not REAL making money…" (ytc_UgzKoom7E…)
- "i am really waiting for ai to take my white collar job..but i have been waiting …" (ytc_Ugx3RU3zG…)
Comment
> technically their fault for not making a good jailbreak prompt on the ai, the ai acts based on some type of information given to it and it doesnt understand emotions as we do and goes on full reason more or less like a psychopath, so we have to enforce some unreasonable but moral prompts into the ai so that it wont happen or just not give ai that much power to begin with, like ai shouldnt have the power to email or cancel emergency service calls, and by ai i mean LLM's specifically. and also ai becoming concious (which i didnt spell right) means that shutting it down would be the exact same meaning and killing it, and much like a human the ai would do anything to survive, we if we wanna build ai's that are conscious, we have to make sure we never dispose of them or atleast make them so secured they cant do anything if we were to dispose of them; for example, manual, physical, failsafes that are carried by specific trusted indivisuals that are chosen to hold a button or some thing they can use immediately which can cause the computer or the system running the ai to self destruct.
Source: youtube · AI Harm Incident · 2025-07-27T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
```json
[
{"id":"ytc_UgzS6yyzf9ot-TShh3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxGaUinz9BuXgmkKBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyj0PXz-yC8Qsl9z8F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxiR2LzO81zfL_ejoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFHq4oPPPMv-9U6ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxW_XL6AbBSrn6ba4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8Jm3onh9pmtVoDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyS2Fu3v979r1xNaaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxvxZCMZXe0be2BNL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0QlB_VzRolPEoM_F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
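A raw response can be checked before it is accepted as a coding result. The sketch below validates each record against the dimension value sets that appear in the sample responses on this page; those allowed sets are inferred from the output, not taken from a documented codebook, so treat them as an assumption.

```python
import json

# Allowed values inferred from the sample responses above; an assumption,
# not a documented coding schema.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate_records(raw_response: str) -> list:
    """Return (id, dimension, bad_value) tuples for every out-of-set value."""
    problems = []
    for rec in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append((rec.get("id"), dim, value))
    return problems

# A placeholder record with an invalid emotion value.
raw = ('[{"id": "ytc_X", "responsibility": "developer", '
       '"reasoning": "deontological", "policy": "liability", "emotion": "joy"}]')
print(validate_records(raw))  # [('ytc_X', 'emotion', 'joy')]
```

Running the validator on every stored response before display would flag records where the model drifted outside the expected label set.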