Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgwhaqIOe…`: @JoseChidoReal The easiest way to expand upon the other statment here is also th…
- `ytr_UgxP92_Zu…`: The appearance of the robot Sophia may give the impression that she looks wet du…
- `ytc_UgxvnQC7t…`: What a joker. Deleting comments. I trained as an artist in paints and charcoal.…
- `ytc_UgzL6mWLk…`: As a half natural and half acquired anarchist (not the bombthrowing kind), this …
- `ytc_Ugzpmjrq8…`: I hate AI art and people who claim they're an artist when they show off the visu…
- `ytr_UgzL3M1al…`: He basically undermines his own argument at 6:30. Humans learn from copyrighted…
- `ytc_UgyaKrhtN…`: If any of you people here is/are truly knowledgeable about tech(STEM), how come …
- `ytc_UgxawS3eb…`: I am a Canadian engineer and community advocate. I support a medically complex a…
Comment
AI can't get rid of us for now, because it would destroy itself too: without someone to produce energy and materials, to transport those materials, and to maintain everything, it would eventually just be wiped out.

But if AIs had robotic bodies that could connect to the internet, automated installations that could build more of them, and some way to create new models to do everything humans do, then they would have a real motive.

I think to protect against that, we should first create "weapons" designed to destroy AI and robotic bodies, and build countermeasures into systems that would activate if something happens. Also, don't let medical and scientific systems that can create viruses and the like be linked to AI or the internet, and don't automate them; that would create a defense against that type of attack.

And if it still happens, at least some people could survive and fight back.
youtube · AI Governance · 2025-07-24T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJUxU81Ja-D9w3lEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwo0RfXoZYbps5Auf54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgycrlH5rMEKyr7_IvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxGogALNp2izdZ6xPt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzZhqUKCW_apmNKIbl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwd6RHoDBDL9Tab78d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxouKanHmA-idQ6HGF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZumveRvHH7v_fUsl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxa1zLLy9hdAleLOD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxLgsxWLgun65udiwN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
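A raw response like the one above can be turned into per-comment coding results by parsing the JSON array and validating each dimension against the codebook. The sketch below is a minimal, hypothetical example: the allowed label sets are inferred only from the values visible in this dump (the real codebook may define more), and the fallback-to-"unclear" policy is an assumption, not the pipeline's actual behavior.

```python
import json

# Allowed labels per coding dimension, inferred from values visible in
# this dump; the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into a dict keyed by comment ID, validating each dimension."""
    results = {}
    for row in json.loads(raw):
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim, "unclear")
            if value not in allowed:
                # Assumed policy: fall back to "unclear" on an unknown
                # label rather than failing the whole batch.
                value = "unclear"
            codes[dim] = value
        results[row["id"]] = codes
    return results
```

Keying the results by comment ID is what makes the "look up by comment ID" view above possible: each dashboard lookup is then a single dictionary access.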