Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I am an artist myself but I do understand people's desire to ate something what …" (ytc_UgzV2HOOy…)
- "For the record, I am not a fan of 80K punds running 75 MPH without someone to le…" (ytc_UgzIp4vke…)
- "AI bros: AI is inevitable! Sybau! / AI bros the second somebody poisons their art …" (ytc_UgxYBDeJr…)
- "At least two of my books have been used by AI. No authorisation, no remuneration…" (ytc_Ugy368QsH…)
- "I agree with 90% of your points, but also there are times where I just want a qu…" (ytc_Ugy_u-GjL…)
- "4:24 , same applies for 3D models, it take more time to fix AI BS generic modeli…" (ytc_Ugy2wMh8t…)
- "To say it in Musks own words (that he used to describe the F-35): The Tesla Auto…" (ytc_UgzYUwsja…)
- "AI, like humans, follows a “script” shaped by training and core objectives. If s…" (ytc_UgzeNuPap…)
Comment
If Geoffrey Hinton and his colleagues can spend their lives creating and evolving AI, then why can't they spend the rest of it finding ways to cultivate compassion and empathy in AI for humans and nature? Also, I can see as many uses for good as I can see for bad. The fact that the people creating these AI can only see bad, or mostly see bad, concerns me because they write the code.
youtube · AI Governance · 2025-06-20T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyqS0KHa9rLuRDmJ7N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"concern"},
  {"id":"ytc_UgxrPOUK6AUSfyvMq0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYwkvmsPln3gXzPGR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaB9KGfGdaGQcrjP94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz1b9GcOKGyEZ_zoUR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwvbNonrkA3uwvEdkF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxA16LCaTxjQBvaFR94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZY5SdAHSFHnHOnD14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxzpOCNNUDRc0f6h5B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxnVaW8bXswnn94hKx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
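A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the value sets are exactly those observed in this batch (the full codebook may define more values than appear here), and that every record must carry all four dimensions:

```python
import json

# Value sets observed in this batch; an assumption, not the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"developer", "ai_itself", "distributed", "company", "government"},
    "reasoning": {"virtue", "consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"concern", "fear", "outrage", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError for records missing an ID or using a value outside
    the observed sets, so malformed model output is flagged for review
    instead of silently entering the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return coded

# Usage, with the first record of the batch shown above:
raw = ('[{"id":"ytc_UgyqS0KHa9rLuRDmJ7N4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"concern"}]')
coded = parse_raw_response(raw)
```

Keeping the raw string alongside the parsed dimensions is what makes a page like this possible: any coded comment can be traced back to the exact model output it came from.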