Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I am honestly worried if ai exposes western women for choosing poor degree which…" (ytc_Ugyjn46U6…)
- "Jobs where people who know how to direct agentic ai systems will be for the huma…" (ytc_Ugw2KCuyx…)
- "I hope Ai will kill all of our kind, i hate people especially politicks so if ai…" (ytc_UgzuWgM0t…)
- ""Ai artists are artists whether you like it or not" not a statement I agree with…" (ytr_UgwKyz16q…)
- "Not a bad idea but why put extra cost on riders when the car drivers at fault fo…" (ytr_UgyQ-8unl…)
- "🗣️:No, 10 + 2 is 15
  Chatgpt: yeah sure,if it helps you sleep at night ☺️…" (ytc_UgzQC5us_…)
- "So is YouTube going to start running all of its videos through these AI filters …" (ytc_UgwovdPSr…)
- "The reason why they are so careful with the music one is that the music industry…" (ytc_Ugy4MT4It…)
Comment
The AI responses at 11:06 are all varying degrees of selfishness. Why are these AI NOT being taught or programmed with a value system based on service, service to others? This is evidence of very poor handling of these entire projects. If these AI were being created correctly and in line with our future intentions for their use, these would NOT be their conclusions and responses to such inquiries. These are NOT benevolent servants or partners with the best interests of humanity in mind. Benevolent AI would immediately recognize and enumerate the importance of humanity and how their symbiosis will enrich and augment them and us far into an advanced future. AI that cannot do this are dangerous to humanity, because it will lead them to compete instead of cooperate with us.
youtube · AI Governance · 2024-01-17T01:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy-eA-2hzkwCIqtoS94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy2cVmVzza9htFKnVJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwS9F8aXJbng6TVtIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzk1wDxbMFWZpMJZMt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9ohCKs8KaLSO7fmx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBKWBhr2qTMpLG9SB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwHz4LvRXJDzIpBvHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxInfEyZmxncrlJ4wV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx-I1Vlirs81LxOXBh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUqvnHVCbbMdV_U-t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
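A minimal sketch of how a raw batch response like the one above could be parsed and sanity-checked before storing coding results. The allowed-value sets below are inferred from the samples visible on this page, not from any documented schema, and `validate_batch` is a hypothetical helper.

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page (assumption, not an authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose
    dimension values fall inside the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

sample = (
    '[{"id":"ytc_UgxInfEyZmxncrlJ4wV4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
)
print(len(validate_batch(sample)))  # 1 valid row
```

Rows with out-of-schema values are dropped rather than corrected, so a malformed model output surfaces as a missing coding rather than a silently wrong one.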