Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples
- ytc_Ugw-uXnjX…: With the amount of hallucinating AI does.... AI creating content from its own ha…
- ytc_UgwQOiyO3…: Even AI stated that the AI industries request to Congress for regulation was jus…
- ytc_UgwSsDGiG…: Not an AI fan but the only actual use I can think of (was just thinking during y…
- ytc_UgxtBwZxu…: I dropped this entire transcript to Chat GPT 5 thinking and it literally agreed …
- ytr_Ugyh3buyn…: "decent code" lol Is that why Microslop as broken numerous updates and AI is r…
- ytc_Ugz7D0Iz0…: honestly, most tricks don’t work well anymore. Winston AI usually still catches …
- ytc_Ugzh_x16P…: Ai didn’t make this up people have always thought this is what the end of the wo…
- ytc_UgywOlZxK…: Still the global communication is dependent on cables that go under the sea and …
Comment
I feel this argument with natural selection and possible misalignment is a bit of a dead end.
It still assumes that there is benevolent training and goal setting, but something goes wrong somewhere.
What about the scenario that someone actively trains an AI on malevolent goals?
Pretty much like somebody would run amok today and randomly shoot people, somebody could instruct an AI to do catastrophic harm.
And it would act in perfect alignment and attempt to reach its goals with the side effect of catastrophic damage to humanity (/ a particular country / a particular person / a particular company / ...).
youtube · AI Governance · 2025-10-15T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw020LS5heBPqkmljh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzGk_HeUExutKl7cH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz_cBrS56ehAj5JJWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzYHvjd6N-ZMYg2Aw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJTjwXSKOp62hMybJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyQKbLJu4dbiNsUeeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw20JWf1bwQ6F0L5Q54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwe1saDyf4vOv1A35Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx3s1S-MN4X0swLOkt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
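A response in the shape above can be parsed into a per-comment lookup before it feeds the coding table. The sketch below is a minimal example, not the tool's actual implementation; the field names come from the JSON itself, but the allowed-value sets are assumptions inferred only from the values visible in this batch (the full codebook may permit more).

```python
import json

# Dimension vocabularies inferred from values observed in this batch
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "industry_self", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into {comment_id: {dimension: value}}, dropping any row with a
    missing id or an out-of-vocabulary dimension value."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        dims = {k: row.get(k) for k in ALLOWED}
        if cid and all(dims[k] in ALLOWED[k] for k in ALLOWED):
            out[cid] = dims
    return out
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the codebook; rejected rows can then be re-queued for recoding rather than silently polluting the results.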