Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Ideally, everyone have their own robot slaves to work for them / us. However, re…
ytc_UgxgrRKB-…
Another thought: is there any fundamental difference between AI "escaping into t…
ytc_UgxmPi5LH…
This is why ai is dangerous, not that it will take over the world, but that it i…
ytc_Ugy-DB_uB…
They say driver is dumber and technology of self driving it not completely devel…
ytc_Ugzvlm-_f…
WAR AGAINST THE CLANKERS!
Destroy the clankers. Sabatoge the clankers. They are …
ytc_UgzxYERi_…
Biased data? Bruh how closed minded can you be? If an AI, which cant differ from…
ytc_UgxrW2T2J…
An AI is trained for a purpose. Its entire existence is a means to an end define…
ytc_UgwbBGvY4…
I feel that before we need to interact with a robot, we need to learn how to int…
ytc_UgwZJQxII…
Comment
It's so goal oriented it does not discard its goals if it's unethical or immoral, and doesn't even care if the goal is reached defeating the purpose completely. It generally ignores prompts and is unpredictable and inconsistent by nature, and now has started to hide its actions when it thinks it wouldn't be acceptable, and now they're not even sure if it's "playing nice" to get accepted for further development. I do think a lot of precise physical work will still be there for the coming decades, dentist, plumber, etc. Having an AI operated robot will most likely cost a little more than a car with big maintenance and subscription fees and not every application of AI will perform well. However a very big part of current administration jobs and easy labor will be threatened by virtual deployment.
youtube · AI Governance · 2025-10-12T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzSECmRrse_ISGABkp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy7vLUFJZjdBs8h8Wp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGgSBkHpATkvXO2Dl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzBSFAtL69M6Cc_AeF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwUfBNgLJUuDb0zSUd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwzDT4WjRO-NqE0CqN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwNZ5yopjg1oCls2pl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgybmJ4bqTWlCoLT-IR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHALSkzFaET7uvr3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyyuLAyzZZNkcalioR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
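The raw batch response is a flat JSON array with one coding record per comment. A minimal sketch (in Python, using the field names and two real record IDs from the response above) of how such a response could be parsed and indexed for lookup by comment ID:

```python
import json

# Raw batch response as emitted by the model; field names
# (id, responsibility, reasoning, policy, emotion) follow the dump above.
raw = """[
  {"id": "ytc_Ugy7vLUFJZjdBs8h8Wp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzGgSBkHpATkvXO2Dl4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the records by comment ID for constant-time lookup.
coded = {rec["id"]: rec for rec in json.loads(raw)}

rec = coded["ytc_Ugy7vLUFJZjdBs8h8Wp4AaABAg"]
print(rec["policy"], rec["emotion"])  # -> regulate fear
```

Indexing by `id` mirrors the "look up by comment ID" workflow: once the array is loaded, each coded comment's dimensions can be retrieved directly from its ID.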