# Raw LLM Responses

Inspect the exact model output for any coded comment. Comments can be looked up by their comment ID.
## Random samples — click to inspect

- `ytc_UgzyNZVtB…` — "The Tech giants are even turning on each others work and breakthroughs in AI. Ap…"
- `ytc_UgxMHp35J…` — "Im not allowed to talk about it / But ai is much further than we know and we hum…"
- `ytc_UgzUAihh0…` — "There is one fundamental problem, large language models will not lead to AGI. Un…"
- `ytc_UgxT8KAK-…` — "Sounds like AI is training itself on to many soap operas, just like many people …"
- `ytc_UgyEIx8a1…` — "Generated an ai image and then I digitally traced it I realized that the traced …"
- `ytc_Ugx3vhna0…` — "Read the patent. It ABSOLUTELY collects facial biometrics, race and other motor…"
- `ytc_Ugxs5MjN6…` — "I don't think LLM's should be the baseline for AI. LLM's are pretty much dead-en…"
- `ytc_UgzrHuYRD…` — "I can’t wait until we see cyber trucks on this Chanel and the USA could use the …"
## Comment

> The point about aspiring writers not needing to worry because we are always going to want human-written books is too naive. At best we will have AI-written books that are marketed and sold as if they were written by a real author. There will be no feasible way to guarantee that a book was written by a real human when AI gets good enough at it, especially when human authors can use AI to write large portions of their book anyway.

Source: youtube · Posted: 2025-07-11T19:5… · ♥ 2
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgzhvAz-v9kxUFq-Pa94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwYJxW12IobZQUdqdB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxNtxvWMe8YUMoSxIp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0EcnzDophO8Dizu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznU3LbK4DV4NAu2_t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_Ugyl4nyMcA9u4O7gV_14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwUg5UMDmfIZjREILV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzw6OFPEeJHBE9BaIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCbQDsF63sGoL07Mt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-lRONqN7XT7CHlVh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
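The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing and validating such a response in Python (the field names come from the output above; the allowed value sets per dimension are assumptions for illustration and may differ from the real codebook):

```python
import json

# Allowed values per coding dimension. These sets are assembled from the
# values visible in the sample output above, not from the actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "disapproval", "outrage", "fear",
                "resignation", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the allowed set for its dimension."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Look up one comment's codes by ID, mirroring the tool's lookup feature.
raw = ('[{"id":"ytc_Ugyl4nyMcA9u4O7gV_14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugyl4nyMcA9u4O7gV_14AaABAg"]["policy"])  # liability
```

Validating against a fixed value set at parse time catches label drift (e.g. a model emitting a synonym the codebook does not define) before it contaminates downstream counts.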