Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
could it be that AI is currently bad enough be open source, but since its getting better and smarter its getting more and more dangerous in public uncontrolled hands? for example if quantum computers would just be let loose for anyone to buy, it could lead to huge cyberattacks. But if they were terrible enough to not be dangerous at first and their developement just accelerated uncontrollably, it would have a bad outcome for the internet and passwords and stuff.
so to summerize maby AI is harmless enough now, that we trust it in public hands but the acceleration of its developement is catching up to where it could be very dangerous.
Platform: youtube
Video: AI Moral Status
Timestamp: 2025-11-05T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
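The batch response above is a JSON array with one object per comment, keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up one comment's coding from such a response might look like this; the function name `coding_for` and the inline sample are illustrative, not part of the tool:

```python
import json

# Sample batch response in the same shape as the raw LLM output above:
# a JSON array of per-comment codings, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgyhVfwEzTPiw9VXD1B4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
"""

def coding_for(comment_id, raw):
    """Parse a raw batch response and return the coding for one comment ID,
    or None if the model's output does not include that comment."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id.get(comment_id)

coding = coding_for("ytc_UgyhVfwEzTPiw9VXD1B4AaABAg", raw_response)
print(coding["policy"])  # → regulate
```

Building the `by_id` dict first makes repeated lookups cheap when inspecting many comments from the same batch, and `get` returns `None` rather than raising when the model skipped a comment.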