Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or browse the random samples below (a minimal lookup sketch follows the list).

Random samples:
- "SO TRUE, THEY TRYING TO GET RID OF HUMANS BECAUSE THEY DONT LIKE THEMSELF!! THE…" (ytr_UgzQcREkO…)
- "AI simultaneously makes sloppy garbage thats useless and brain rotting and nobod…" (ytc_Ugzp1es-x…)
- "Right now all the LLMs do is synthesizer information. If that's all your degree …" (ytc_Ugwf9e8Ri…)
- "I always yell at ChatGPT that its too liberal, and for it to give me a YES OR NO…" (ytc_UgwAnYG6w…)
- "Whose going to buy the humanoid robots, self driving cars, services and products…" (ytc_UgwpQmuYE…)
- "14:28 I don’t know if this is helping or not but almost all of the art I make an…" (ytc_UgySLPYjQ…)
- "We can actually cancel 90% of all "podcasts" just let the bigbrains talk to chat…" (ytc_Ugw8qJRjF…)
- "The truth hurts, but yes, some jobs will disappear all the same, and …" (translated from French) (ytr_UgwsKU70V…)
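A minimal lookup sketch in Python, assuming the codings are stored one JSON object per line in a hypothetical `codes.jsonl` file; the file name and storage layout are assumptions for illustration, not this tool's actual backend:

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "codes.jsonl") -> dict | None:
    """Return the stored coding for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example lookup, using a full ID that appears in the raw response below.
print(lookup_coded_comment("ytc_UgzeCPmGEFCnhlrcbEl4AaABAg"))
```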
Comment

> I don't think AI would try to end humanity like in the movies: fomenting wars, attacking with robots, nuclear self-destruction, etc. If AI did have that kind of power, it would more likely act either by creating a super virus, as was mentioned in the video, or, since AI would be taking care of growing food in the fields and of the supply chains, by simply disrupting that. And it would be game over.

youtube · AI Governance · 2025-06-25T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
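For reference, the four coding dimensions can be written down as a small schema. This is a sketch in plain Python, with the allowed labels inferred only from the values that appear on this page; the project's real codebook may define more (or different) labels:

```python
# Allowed labels per coding dimension, inferred from the codings visible here.
CODING_SCHEMA: dict[str, set[str]] = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}
```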
Raw LLM Response
```json
[
{"id":"ytc_UgzeCPmGEFCnhlrcbEl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcCdQMpyvWouccWEp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzc0lQbQm_nkXCL1pN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkcRPRy1enESkmmSl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwoDVBefIVFy4UkE1x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxjpWWOyw6-YRWeXC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcCSEclhq1U3iRrId4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxieGYWr-MK4dQ6Ij14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxtA1l834FaZ6j5uBh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxhzNPfamOyqFZQkRZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]
```
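A minimal sketch of how a raw batch response like the one above could be parsed and sanity-checked before its codings are accepted. The `ALLOWED` sets and the `ytc_`/`ytr_` ID prefixes are inferred from this page, not taken from the project's actual validation code:

```python
import json

# Mirrors the schema sketched above; inferred from this page only.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of codings) and validate it."""
    records = json.loads(raw)
    for rec in records:
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {rec.get(dim)!r}")
    return records
```

Validating against a closed label set like this catches the most common batch-coding failure, the model inventing an off-schema label, before a bad record reaches the table shown above.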