Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any row to inspect):

| Comment preview | Comment ID |
|---|---|
| Wait until it zoomes out again and say's "Ai art observing Ai art while Ai art o… | ytc_Ugy87HPSf… |
| it's the same damn thing when people say "it's only sexist when men do it" it's … | ytc_UgxdyE3D3… |
| So this is obviously a major misscalculation ... 😅😂. Who forgot to put the nece… | ytc_UgzWHD4iu… |
| Dr. Jain simplifies things so well! It's crucial for brands to monitor AI mentio… | ytc_UgyuShBlV… |
| Straight up Bullshit.......watch the reality of mass anarchy break out, if these… | ytc_UgwT4zIpw… |
| Did did you hear that from us want to AI Revolution get ready if you know you kn… | ytc_Ugzs0dLzq… |
| How he gonna call the damn robot goofy lookin?,.....Has he looked in the mirror … | ytc_UgxuiJvCa… |
| I work in tech, and I know AI just laid me off. There's so many people looking f… | ytc_Ugw_fYWFO… |
Comment
I'm thinking how does it know who to learn from without having levels of importance and priority programmed into it?
If you were learning what is correct, incorrect, right and wrong from the bulk of information on the internet, then you'll never be 'smart'. Theres so much false, misinformation on the internet, how do you recognise good information when you find it, even humans seem to struggle with this.
An what does it do when people disagree with each other, does the AI take sides, or just chose to not learn?
Whenever I type something into one of the big search engines or Wikipedia (something that is very familiar to me), I spot mistakes nearly every time, so who corrects them?
What is 'smarter' than us, knowing the most popular opinion of something?
youtube · Cross-Cultural · 2025-11-11T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugyud99hEP-lnqYzzcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKkc7NEl38kx8R9nR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2qacGkRJjV0ra9o54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy10ShaLGjFCwC-h1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxY3QLu6bHRqg2mzbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyq8ukV552ZEYoCM9F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnoyftupX17IKX6wt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVYoGTflu5mywBEvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyoJ55WG1a4f6ac6C94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2NjAQ7nvqil_6sc54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]