Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "i kept seeing fran bow & mr midnight as well as coraline!! its such a fun way to…" (ytc_Ugzhzl9rP…)
- "I have a similar opinion on ai images and nightshade as I do piracy and DRM. I p…" (ytc_Ugzaw1xV2…)
- "artists cannot sue the a.i but they should feel free to sue the developers imo…" (ytc_UgwfaKByP…)
- "isnt it interesting, the 10x programmers know whats up but the mid tier wannabe …" (ytc_Ugwo4ieCK…)
- "3:14 - Complete and utter bs, as someone who got into the mechanical engineering…" (ytc_UgzTdJuLF…)
- "YES, it's crazy. I tried calling Amazon AWS because MFA wasn't saved - locked ou…" (ytc_UgxaANFhG…)
- "The whole "AI is just supercharged autocorrect" argument is already so outdated.…" (ytc_UgzznqLcu…)
- "He seems overworried which makes me think. If AI is something you are so worried…" (ytc_Ugwb7jUmN…)
Comment
We are nowhere close to super intelligence. Large language models are very good at using existing data to predict the next word. They can’t think or really solve new problems. Science will need a new breakthrough to even come close to approximating the brilliance of a human mind and they are not close.
Will we still lose a ton of jobs to automation? Yes. Should we be concerned? Yes. Creative work that’s based on human intuition and thinking isn’t going anywhere. Not until they have a completely new breakthrough.
Source: youtube · AI Governance · 2025-09-07T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxD0vrPT3RD52nGTR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWfq47L7r_jhkECbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzafRWgFRAgo7jJw9t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQ5OzphStcvJjFS-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPzjxo-5wJrf6n_bZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyLTYnwWy3T5zE9q94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwLbp5b1bwMzjnyovh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHZd-vi_BIfh5EJyh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyHT9PlaO1C-W0xxPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzdzm_B3xM1u_os6LB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
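The raw response above is a JSON array of per-comment coding records, each keyed by a comment `id`. A minimal sketch of looking a record up by its ID, using only standard-library JSON parsing (the two sample records are copied verbatim from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above (truncated to keep the
# sketch short); in practice this string would be the full model output.
raw_response = """
[
  {"id":"ytc_UgxD0vrPT3RD52nGTR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxyLTYnwWy3T5zE9q94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

# Index the array by comment ID so any coded comment can be inspected directly.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# Look up one comment's codes by its ID.
record = codes_by_id["ytc_UgxyLTYnwWy3T5zE9q94AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

The same dictionary supports the "look up by comment ID" workflow shown at the top of the page: one `json.loads` per raw response, then constant-time lookups per comment.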