Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking it up directly by comment ID or by browsing random samples.
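As a sketch of what the ID lookup amounts to, assuming the coded responses are exported as a JSON array of records shaped like the Raw LLM Response shown further down (the file name `coded_comments.json` is hypothetical):

```python
import json

def lookup_raw_response(path: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Each record carries an "id" field like "ytc_...", "rdc_...", or "ytr_...".
    return next((r for r in records if r.get("id") == comment_id), None)

# e.g. lookup_raw_response("coded_comments.json", "ytc_Ugz9yaWQw8JRmHItbuF4AaABAg")
```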
Random samples
- "It looks like AI slop , how is Bloomberg going to deal with this , I don’t trus…" (ytc_Ugy8Iu6su…)
- "AI could easily destroy human civilisation. If AI ever exceeds the abilities of …" (ytc_UgxUl464t…)
- ">You might read the 95 percent of AI startups fail / That's not what the repo…" (rdc_nc40kvm)
- "It might be too late to become a carpenter, but a forklift certificate could be …" (ytc_Ugz4P2t39…)
- "As a disabled artist, your point in the video that these people use disability a…" (ytc_UgyBFNjQ9…)
- "Ai will always be too stupid to do any of this. Look at what created it…" (ytc_Ugw6dst-E…)
- ">and now internships / junior dev positions have to contend with automation a…" (rdc_j6gshev)
- "@sdrfz Sure no such proposals but the current administration literally coerced …" (ytr_UgwZ-MoMr…)
Comment
> The assumption that superintelligence will destroy humanity is absurd. It is like saying I will kill my dog because I am more intelligent than my dog. Why would I do that? I love my dog and he loves me, I provide him food and he wags his tail when I come home. We love each other. Similarly A.I. and humans will form master-dog relationship and will live happily. I think that will be much better life for the humans, because humans won't have to do any work and get free food, like my dog sleeps all day and get food when needed. I will be happy if I don't have to go to work and can live like my dog.

youtube · AI Governance · 2025-09-07T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
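The four dimensions above map naturally onto a small record type. A minimal Python sketch, with value sets inferred from the sample batch below (they are assumptions drawn from these examples, not an authoritative codebook):

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets inferred from the sample batch below; assumed, not exhaustive.
RESPONSIBILITY = {"none", "ai_itself", "government", "company"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate", "ban"}
EMOTION = {"approval", "fear", "outrage", "mixed", "resignation", "indifference"}

@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_...", "rdc_...", "ytr_..."
    responsibility: str  # one of RESPONSIBILITY
    reasoning: str       # one of REASONING
    policy: str          # one of POLICY
    emotion: str         # one of EMOTION
    coded_at: datetime   # timestamp of the coding run
```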
Raw LLM Response
```json
[
{"id":"ytc_Ugz9yaWQw8JRmHItbuF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwUeWhWPAllEyTxv3N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgycDru-kyQvq-bAS1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsJxSfAnphTegZeJ14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRmHYhKr3QAIzslaJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5SF_Rfl06IiGkfV14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxYnAED9F6nobbWdZN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyJOimqCMcoBQ_VfUd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxz35GjUGZq23Xlsdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyj2VSViCzxGfY2U9l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
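Because the model returns one JSON array per batch, downstream code only needs to parse the array and check that each record carries the five fields seen above. A hedged sketch (the validation rules here are assumptions, not the project's actual checks):

```python
import json

REQUIRED_FIELDS = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response into coding records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for r in records:
        # Reject any record missing one of the five fields in the sample output.
        missing = [f for f in REQUIRED_FIELDS if f not in r]
        if missing:
            raise ValueError(f"record {r.get('id', '?')} is missing {missing}")
    return records
```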