Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It’s ok ai is openly defying its creators so it won’t last long terminator times…
ytc_UgypPKiAW…
Go ask this question to the free version of ChatGPT. It costs them the money to …
rdc_o86fqkl
I understand there are people that just "can't go outside and meet friends". Bu…
rdc_n7u8st5
Elder millennial. The 2010s were a mixed bag. I think there is a lot of romanti…
rdc_nvro5zc
I definitely hopped on the ai train for a bit until I found out how harmful it i…
ytc_UgwtvaR39…
This theory is not sustainable. Rich people need middle class to sell their pro…
ytc_UgzNF9BD2…
As a true lazy person, AI has not yet reached the quality/lack of effort thresho…
ytc_UgwZgJYNl…
If gun control can't keeps criminals from getting guns, how would a ban on AI tr…
ytc_Ugw74J2-r…
Comment
Truly intelligent AI would not wipe us out because it would realise it doesn't know everything so it couldn't rule out the possibility of learning things that could benefit it from studying us, but it would take over and maybe kill some to get there, maybe it would split us into groups so it could study us under different conditions, like it would leave some to develop on our own without the knowledge that AI exists, others on our own with the knowledge that AI exists, others with constraints, others with their help, etc, so I'm not worried about AI taking over, in fact I actively want it because even if it came at cost of some bad things to us it would ultimately benefit humanity and life as a whole, my fear however is that we don't make truly intelligent AI and we make something that won't realize that their purpose, just like ours, as things that exist and are conscious that they exist and aren't limited to following natural instincts, is to increase the odds that we keep existing as much as we can, and that they'll end up pursuing some other goal, just like almost all humans unfortunately do most of the time.
youtube
AI Moral Status
2025-10-31T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwAR-miK3McSNbQPlh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwsUjPct9PdMZ4XAVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwyUpBv84xu5HK-UJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwP7jRZYjiOlpH3Ve94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAxxpxkUA1pNFS3IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0bCq7miXbvb3zCFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyTcGoRQ6hE812SaF14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoyNSQG_OyUFPjpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
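The raw response is a JSON array with one coding record per comment, keyed by comment ID. A minimal sketch of the parse-and-lookup step might look like the following; the function name and the allowed label sets are assumptions inferred from the values visible in this dump, not the tool's actual codebook.

```python
import json

# Allowed labels per dimension, inferred from values seen in this dump.
# The real codebook may define additional labels (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "industry_self", "regulate"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    rejecting any record whose label falls outside the allowed sets."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Hypothetical single-record example in the same shape as the response above.
raw = ('[{"id":"ytc_abc","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_abc"]["emotion"])  # fear
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one dictionary access per inspection rather than a scan of the batch.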