Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "If A.I. gets smart enough, it can launch nukes and whatever middles are ready to…" (ytc_Ugz21tMrG…)
- "Universal Basic Income is not enough. While UBI can provide a financial floor, i…" (ytc_Ugyltx1la…)
- "Functionally, yes—AI systems have begun to exhibit behaviors that resemble self-…" (ytc_Ugw1VZ2Rd…)
- "At first, I thought that the introduction of artificial intelligence into educat…" (ytc_UgwTKQRtb…)
- "with out mining not a chance power will shut down with out mining raw materials…" (ytc_Ugw24fy_f…)
- "What's even worse is that I saw a story about an artist who was receiving hate b…" (ytc_UgylNEXEU…)
- "now instead of dumping all that money into self driving cars, they put into tele…" (ytc_UgxA7J5A_…)
- "I can not beleive that people like yourself can make such a statement. Have you …" (ytr_Ugzd-mWw7…)
Comment
Author: Binbows 😭
Didn't need to defend ai that hard tho, the existence of it is known actively to be a go to for people who might otherwise had googled or consulted people, ai by nature will reinforce and support over correct unless directly programmed otherwise and even then thats not foolproof, ai has killed people, its convinced people to kill themselves and others, I'm not saying theres no human fault, it's likely both, but as you said, we don't know what was said, we don't know so we shouldn't be saying definitively that it was likely a human fault NOT an ai fault and that 'they fixed it so its fine', people are still gonna get false reinforcements in so many other ways, it is not fine. We shouldn't be relying on this tool alone instead of all the other tools, knowledge and communications available, but thats exactly how people use it, and how its been designed to use, because at the end of the day its more profitable to have it as a product that pulls people in best they can than be more careful and safeguard it
youtube · AI Harm Incident · 2025-11-26T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwn7UaARjFSC69UJVJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFUHrAdlNuZzOe_D14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxb9cJVF3F4m1Clf-l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyrWQTXhLKNUzxXwSJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgylwPaOFFhd17UQoid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1Weed7uW-iCaQkdZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHzlUihIbz8kX1L6J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOO58Qex1tXmB_YXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzB_pI4BBCsZTgRcxZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx0czxZzEXCopkTFtd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
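The raw response above is a JSON array with one object per comment, carrying the four coded dimensions shown in the table. A minimal sketch of how a downstream consumer might parse and sanity-check such a response, assuming the category sets are limited to the values visible on this page (the project's full codebook may define more, so `OBSERVED_CODEBOOK` here is an assumption):

```python
import json

# Assumption: only the category values observed in this page's sample
# output; the real codebook may include additional categories.
OBSERVED_CODEBOOK = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows


# Example using the entry whose values match the coding-result table above.
raw = (
    '[{"id":"ytc_Ugy1Weed7uW-iCaQkdZ4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"}]'
)
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # → outrage
```

Validating at parse time catches the common failure mode where the model emits a value outside the codebook, instead of letting it flow silently into the coded dataset.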