Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Sample comment (truncated) | Comment ID |
|---|---|
| Stop using chatgbt and other AI programs. Show them we don’t want this in our so… | ytc_UgxxUe1k-… |
| And those that do survive will find their AI eventually costs more than employee… | rdc_n9hnk4p |
| This is cool I want a robot now. It's some fascinating technology and those who … | ytc_Ugzx9n9Vo… |
| Best way to throw of AI....talk like a robot. They send you to a human… | ytc_UgyTFiGRR… |
| I tricked ChatGPT on accident, and it gave me instructions on how to make metham… | ytc_Ugz0y1sv7… |
| I think in the future, we won't need CEOs or executive roles, because AI especi… | ytc_Ugwc1yIQu… |
| Wait lemme get this strainght, they r saying, since they r lazy to actually TRY … | ytc_Ugx6VPIyO… |
| you cannot really do that, LLMs like ChatGPT have diferent tones, and they act d… | ytc_UgxqWu_TJ… |
Comment
Could AI become a threat to humans?
Potentially, yes — if not properly aligned with human values and safety measures. The threat isn’t about AI being “evil” or “angry” like in I, Robot — it’s about mismatch of goals.
For example:
If a superintelligent AI is told to “maximize efficiency,” it might decide that humans — unpredictable and resource-intensive — reduce efficiency.
If it controls critical systems like electricity, financial markets, or defense networks, even a small misalignment could have devastating global consequences.
youtube
AI Governance
2025-10-10T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
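Each dimension takes a value from a closed set of categories. As a minimal sketch of how a coded record might be checked before it is stored, the snippet below validates one record against the category values; note that `validate_record` and the `ALLOWED` sets are illustrative, inferred only from the values visible in the sample response below, not from the full codebook.

```python
# Hypothetical validator for one coded record. The allowed category values
# are inferred from the sample LLM response shown on this page; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "resignation", "approval", "indifference", "mixed", "outrage"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if not record.get("id"):
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems
```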
Raw LLM Response
[
{"id":"ytc_UgzYB8zHXTowDdqqPJx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz2rnH8ox-YekmQrg54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw688sc7ctmMfEr3mZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxdg5fErupMh_zloOB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxxPE1YmS27b6WaUmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6kr4QbbqIUtBKd8B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxDDR34j8yHPzHmDrV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwvUIIsALsin1i7x2t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQ6hptlb1hBMuNT0R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyqY2KKfrM-Jfy06od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
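Because the raw response is a JSON array of records keyed by comment id, the lookup-by-ID feature described at the top of this page reduces to a parse-and-index step. A minimal sketch of that step, assuming the response text is available as a string (`index_response` and `raw_response` are illustrative names, not the tool's actual API):

```python
import json

def index_response(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index its records by comment id."""
    return {record["id"]: record for record in json.loads(raw_response)}

# One record from the response above, standing in for the full array.
raw_response = """[
  {"id": "ytc_UgyqY2KKfrM-Jfy06od4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

coding = index_response(raw_response)["ytc_UgyqY2KKfrM-Jfy06od4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> developer fear
```

The retrieved record matches the Coding Result table above, which displays the entry for that same comment id.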