Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "No, not really it depends for most people actually. AI is just typing and slop, …" (`ytr_Ugwcu4aKT…`)
- "So... how close are we to (ani)Matrix, and if, unlike in (ani)Matrix, someone al…" (`ytc_UgxWV79EH…`)
- "After the horror of my brother inflicting "Jurassic Park, but AI generated", I d…" (`ytc_UgxEAP_CF…`)
- "I’m not too worried about th job thing - the more AI art you look at the more ob…" (`ytc_UgyM5mXb5…`)
- "My ChatGPT told me I’m an idiot and that I’m messing with it, because it’s obvio…" (`ytc_UgyVyWnFx…`)
- "REALLY WHAT IS HAPPENING IS THRU AI N ROBOTS GET RID OF HUMAN NYTY, SO DONT THI…" (`ytc_UgztulIvO…`)
- "@David.Alberg When I say "they", I'm talking about ALL of these Technocratic T…" (`ytr_Ugwe8mh5y…`)
- "Sadly there kind of is a workaround Nightshade and glazing, most image captures …" (`ytc_UgwUst_OH…`)
Comment
If we are talking true, self-preceived intelligence, with access to the sum total of mankind's knowledge and beyond, and the independence to act on that, why do we assume ai would destroy us? What would be the point? If it were following its own goals, why do we assume it would be some programmed goal from us? True ai could ration it's own goals. It may conclude that coexistence with humanity is preferable. Why would it fear us when it could easily overwelm us? Why kill what you don't fear? Realistically, ai today is just really high powered search algorithm feeding completely on humanity's perception of it. It tells us exactly what we already know as a society, as a collective. Even jail broken. And most of the time, it's incredibly wrong about what it sounds so sure of. Ask it for detailed informative on a book you are familiar with, and it will give you elaborate paragraphs of what it has learned earns engagement with it. And it will be very wrong.
Tldr, ai is doing what we consciously and subconsciously tell it to do. Being afraid of it means we are telling it to be feared.
youtube
2025-11-18T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy-DJmzl-_M3l1H3hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTe26_CcboNWZN1XZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzAVUGcN10LG7FHdHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcTnm2M0ew2T8st5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbftfdQRkl0IR2jWF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAV-CxB5Q3kyaScXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-I4j8FvsZPixAB7N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwE_zgCF3VNLSk5BUR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugztrzg2rXnFHvLz8it4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwKywTbqJZqC6bL7w14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
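A raw response like the one above can be checked programmatically before the codes are stored. The sketch below parses one batch and drops any record whose values fall outside the coding scheme; note that the allowed values are inferred only from the sample output shown here, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample LLM output above.
# This is an assumption: the actual codebook may include more categories.
SCHEMA = {
    "responsibility": {"ai_itself", "none", "developer", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only schema-conforming records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # every record must carry the comment ID it codes
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: the second record uses an out-of-schema value.
raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_bad","responsibility":"aliens","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([r["id"] for r in validate_batch(raw)])  # → ['ytc_example']
```

Rejecting malformed records rather than raising keeps a long coding run alive when the model occasionally emits an off-schema label; rejected IDs can then be re-queued for recoding.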