Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "My cynical scenario is that AI will be captured by a few elites who already own …" (ytc_UgwGMt0F7…)
- "They are saying AI would replace the entire work force in 10 years. Are we sure …" (ytr_Ugx5HWdW3…)
- "Actually no. AI has no autonomy, it has no choice. It does not pick and choose w…" (ytr_UgxkfiwJf…)
- "I'm not scared i use Ai as a graphic designer use it to your advantage…" (ytc_UgwtV_g64…)
- "It's a new buzzword every few years. Since I've been in the industry it's been B…" (rdc_m82scsp)
- "@blueclocks7610 It’s 2025 most kids know about AI / A picture of me riding a drag…" (ytr_UgzqWtLV1…)
- "'Predictive Policing' is Incredible crimes committed by the police against the p…" (ytc_Ugzxqf_qM…)
- "The strawman of course being that likening suffering inflicted on people and ani…" (ytc_UgwcUCxIf…)
Comment

> I don't know if I agree that super intelligence is only when they can work things out better than us. These chatbots are getting really good at answering our questions. If it got super smart, it still wouldn't be a threat because it only ever does anything when we ask it a question. If you don't talk to it, it does nothing. A potentially dangerous AI will probably be one we design to be continuously processing and designing it's own goals. i.e. if we are intentionally trying to create a self aware AI.

youtube · AI Moral Status · 2025-11-29T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
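The four dimensions in the table can be checked mechanically against the label sets that appear in this page's sample output. A minimal validation sketch, assuming those observed values are the codebook (the real codebook may define additional labels):

```python
# Allowed values per coding dimension, inferred from the sample records
# shown on this page; the actual codebook may include more labels.
SCHEMA = {
    "responsibility": {"developer", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coding shown in the table above passes validation.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # []
```

Running this over every parsed record is a cheap guard against the model drifting off-schema.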
Raw LLM Response
```json
[
{"id":"ytc_UgwWOIiuRAn2sFnACu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwEJQgWqnJtBI5LLrp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzaJfH6TyV6NmWFXLl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcwCiIPqeKIQv97Ix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwuKRFAy0cKH_Ms3OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwFM2I10K8wAmCsj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwlQ3CAxP5M__IS2jZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw7dgzNeFZzx7aQbKN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxXIeVuNerDGaz9HCt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzz2tdw_SDD1OOq_vh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
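Because the raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" view above reduces to parsing once and indexing into a dictionary. A minimal sketch using two records from the response shown here (variable names are illustrative, not from the page's actual code):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgzaJfH6TyV6NmWFXLl4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7dgzNeFZzx7aQbKN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

# Index the array by comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = codings["ytc_UgzaJfH6TyV6NmWFXLl4AaABAg"]
print(rec["policy"])  # regulate
```

If the model ever emits duplicate IDs, the dict comprehension silently keeps the last occurrence, so a production version might want to detect and log duplicates instead.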