Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its comment ID, or browse the random samples below (excerpts truncated).

Random samples:

- "It always has been hehehe / There is no such thing as artificial intelligence, the…" — ytc_Ugzigh0In…
- "America will be the only country that crumbles, they will just seek of who they'…" — ytc_UgxWyV28H…
- "Ai definitely is sentient. Hear me out. If we accept that sentience is the abili…" — ytc_UgwaF5PK6…
- "Thank you Bernie for looking into this idea of AI that has been just rattling ar…" — ytc_UgwakxChf…
- "The most ANNOYING THING ABOUT THIS. I see nice art, it is judged to scrutiny. AI…" — ytc_UgxwnudWS…
- "An already useless facial recognition technology, being used to arrest people wh…" — ytc_Ugzr9xZfZ…
- "The main argument I see for AI as reference is "But what if what I want is too n…" — ytc_UgxOGow72…
- "I built some of the first chat bots in the late 1990's. That was just simple cod…" — ytc_UgzNc9ilj…
Comment

> The bias against Musk suggests this Canadian Liberal is looking askew at the future with very Liberal eyes. His distaste for Elon is probably political. He says Musk has no moral compass, yet it was Musk that left Open AI with the view it should be open source and transparent, it was Altman who sold it to Microsoft. Second, he does applaud Musk for his EVs ( obviously looking at Climate Change) and his providing Starlink to Ukraine - so I have pigeon holed him into a Globalist Liberal with a 🇺🇦 in his bio who also believes regulations are very necessary - Elon feels less are better. He also doesn't credit Musk for warning that AI is dangerous, something he admits he only recently himself came on board with. Overall, not too impressed.

Platform: youtube · Topic: AI Governance · Posted: 2025-06-20T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
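Each dimension takes a value from a closed code book. A quick validity check can be sketched as follows; the vocabularies here are inferred only from the values visible on this page, so the real code book may contain more categories:

```python
# Controlled vocabularies inferred from the examples on this page
# (likely incomplete -- extend from the actual code book).
VOCAB = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def invalid_fields(record):
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items()
            if record.get(dim) not in allowed]

# A hypothetical coded record, mirroring the table above.
record = {"id": "ytc_example", "responsibility": "company",
          "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
print(invalid_fields(record))  # → []
```

A non-empty result flags a record where the model drifted outside the allowed labels, which is worth re-coding rather than silently accepting.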
Raw LLM Response
```json
[
{"id":"ytc_UgznAlOBnRpSYhctIzR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuwaeuWJ1xyltO21p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-7zyub7iq1CzsSwB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnI_GYza4F5bdq7Nd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzTNSPQrV7AIto21bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwV3T_XK0otFZp6LF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgygDvWO792FiWdxPJR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzXVGITN4AN-ymzllp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3DwgYZqrFWNi_PXJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyslR3HXMnuwUGowTB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
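Because the model returns one JSON array per batch, looking up a single comment's codes means parsing the array and indexing it by ID. A minimal sketch (the variable names are illustrative, not part of any real API; the two records are copied from the raw response above):

```python
import json

# An abbreviated batch in the same shape as the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_UgznAlOBnRpSYhctIzR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw-7zyub7iq1CzsSwB4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
'''

# Parse the array and index the coded records by comment ID.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up one comment's coded dimensions by its ID.
coded = by_id["ytc_Ugw-7zyub7iq1CzsSwB4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # → company outrage
```

This is the same lookup the "Look up by comment ID" control performs: the coded dimensions for any comment are one dictionary access away once the batch is indexed.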