Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "No competition. X peng more fluid , casual and realistic, the Tesla robot nee…" (ytc_UgxTgtavp…)
- "What if the a.i. is listening/watching this video. 😮. We are giving it ideas. 😮.…" (ytc_UgwcYhy08…)
- "Calling AI similar to a camera or a brush is actually crazy - these things don't…" (ytc_UgymRdMcN…)
- "Be ready for when a school AI detects a gun as a bag of Doritos…" (ytc_UgyPAqich…)
- "One time I saw this comment for buddy and said you are the cute robot overlord y…" (ytr_UgzC6EAGU…)
- "AGI is impossible and always will be. Humans are not God. They will keep trying …" (ytc_UgxGnMXP2…)
- "I'd love to see this kind of conversation with more points of view. I haven't wa…" (ytc_UgyEzIUdv…)
- "But companies are still doing it. They don’t want us to use it as a tool. They w…" (ytc_UgxNyQ1R6…)
Comment (youtube · AI Governance · 2023-04-18T09:1… · ♥ 2)

> If I would have to compare the danger of AI to a movie terminator wouldn't be my first choice I think I would use Wargames, A Space Odyssey or Upgrade (the behaviour not the physical interventions but although come across interesting for everyone who watched the movie to see the guy from google behave like he does makes me think a lot of this movie!)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaUozZ3YKL7YxT2p94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxpTY08jrAL26BZ8V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4HScAn3ssF5aw7kZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwB2SKteKPATMzazbt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyISgMbvzoyWeGM3lN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAH65Kcj5fkrKljm14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqJvkpFzIKlCyVNFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxcmWmTGbrFN1022a14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2uIZqbrEQVmh7-jN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy7gh9yXDRNg8WI22t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
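The lookup-by-comment-ID feature above can be sketched as follows: parse the raw LLM response (a JSON array of coded comments) and index it by `id`, so any single comment's four coded dimensions can be retrieved directly. This is a minimal sketch, not the tool's actual implementation — the `raw_response` string below reuses two rows from the response above for brevity, and it assumes the model output is valid JSON (real output may need fence- or whitespace-stripping first).

```python
import json

# Two rows copied from the raw LLM response shown above;
# the full response is an array of such objects.
raw_response = """[
  {"id":"ytc_UgxaUozZ3YKL7YxT2p94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxpTY08jrAL26BZ8V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

# Parse the array and index it by comment ID for O(1) lookup,
# mirroring the "Look up by comment ID" box above.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgxaUozZ3YKL7YxT2p94AaABAg"]
print(row["responsibility"], row["emotion"])  # → ai_itself fear
```

A missing ID would raise `KeyError` here; a real tool would likely use `coded.get(comment_id)` and render a "not coded yet" state instead.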