Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwCET5zv…`: closing - lauren and Adib are "temporarily embarrassed billionaires" ... and th…
- `ytr_UgxkRDtc-…`: all the technology is there except the AI, but someone could still plan overall …
- `ytc_UgyZUIXw_…`: We just co-exist and things will change but maybe we can focus more on our plane…
- `ytc_UgwyGSmPT…`: lol don't underestimate A.I. y'all are going to be eating your words, it is happ…
- `ytr_UgzRRkiyQ…`: Thank you for your comment! In the video, Sophia discusses embodying wisdom and …
- `ytc_UgxUqhFm0…`: Ai art looks like actual art that people are just jumping to conclusions that so…
- `ytc_UgyQQkinX…`: So he had to go to school and get a new job.Because the robot took his job yet, …
- `ytc_Ugwpy-oKX…`: wow... this is sad he didn't just ask chatgpt if bromide was harmful to humans..…
Comment
I'm 24 and about to crap my pants for this idea of AI taking over like a real life terminator. Why do we need AI this much!? Can't we just use AI for small things, instead of possibly trusting our lives to their hands completely??!! Also what good does AI actually do with so much power?
I hear mostly harmful stuff about AI, like how some people will generate a full nude pic about someone without their consent, or asking AI to teach them how to manipulate and successfully scam someone. Or then they just abuse AI in other ways... Because even if AI didn't tell straightforward how to manipulate someone, I don't see why also a little human being couldn't trick AI as a goal to cause harm. Also if AI is supposed to be clever and know even more than what we humans already know, I don't see why AI couldn't develop some sort of intelligent understanding about morals, manipulation and just generally about how fucked up our society might be. So also for this reason I think it's way too risky as a human to just give orders to AI to not attack people or take over, if we also as a human can't promise AI back that kind of safety. Even if they don't have physical feelings, it's still a high intelligence we are against with...
youtube · AI Harm Incident · 2025-09-12T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyDvrUM_CjHGW8GmK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxlwSrBylmxVvkjAeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmGpBsXxbFcjXff5J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQwVf0LsEB7fC3xBl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxuVuK9qTacj8joICJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwpJwVdgX32nIvjS694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugy01I7I5GlxWyaBC7d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdJvd7EfNQdxC5bzt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzOqyyU9Vymm_6K3fN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
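A batch response like the one above can be turned into a per-comment lookup for inspection. The sketch below is a minimal illustration, assuming the raw model output is a JSON array of coded records keyed by `id`, as shown; the names `RAW_RESPONSE` and `index_by_id` are illustrative, and the two sample records are copied verbatim from the response above.

```python
import json

# Two coded records copied from the raw LLM response above (assumed format:
# a JSON array of objects, each keyed by a unique comment "id").
RAW_RESPONSE = """
[
  {"id": "ytc_UgyDvrUM_CjHGW8GmK54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxlwSrBylmxVvkjAeN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(RAW_RESPONSE)
rec = codings["ytc_UgxlwSrBylmxVvkjAeN4AaABAg"]
print(rec["policy"])  # "regulate"
```

Keying records by `id` makes the "look up by comment ID" view a single dictionary access, rather than a scan of the raw response for each inspected comment.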