Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I believe that character AI uses the likability of famous people on the internet…
ytc_UgyAFxlUl…
Here's a UI system I gamed out with Grok AI today - the final output from Grok:
…
ytc_UgyNB2Yzd…
First look at this channel and I’m impressed. After watching the Tucker intervi…
ytc_UgzvY-bZR…
Ya, but if AI is uploaded unto a robot, that robot must have the 'mucle' memory …
ytc_Ugxjn6_n_…
AI doesn't replicate anything exactly as a human artist made it as a finished wo…
ytr_UgxD8eQBX…
TRIGGER WARNING!!
I understand saying AI is not allowed to learn of other artis…
ytc_UgwarUWBa…
Goooooood. The AI bubble needed bursting. Those jerks at OpenAI etc were basical…
rdc_m9fky07
What I am not certain of is if ai is going to take over but what l am certain of…
ytr_UgxB7Y9xA…
Comment
> I think the capabilities of AI are extremely exagerated. While some jobs are at risk, i don't think it's to the extent being claimed. Companies that are actually implementing ai are having to back peddle. They're finding that it can make one person much more efficient, but cannot replace them. If you want to know what ai can actually do, compare their claims to what happens when companies implement it. You'll find some pretty large gaps between claims and actual implementation. Jobs will be reduced, but not completely eliminated.

youtube · AI Governance · 2025-09-05T12:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxzDglcJ5BjRhoU1D14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyA9ea24KkhbHGWemB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKSegBc7MKYnjNK6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSZxE5ZG_pQYjg6Lx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzI5ef7Rl_shOp4HTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxO6l6rs67qXN33yft4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgykuxCsaS1zHj-xr5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwroszhiZJwg6Uxj9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw0NRI4FV8epB9FUch4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoRdWF6xMWKusaHuF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
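The "look up by comment ID" view above amounts to parsing this raw model output and indexing the records by their `id` field. A minimal sketch (function and variable names are illustrative; the record fields and IDs match the raw response shown above, shortened here to two entries):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Each record carries the four coding dimensions shown in the result table.
raw_response = '''[
 {"id": "ytc_UgxzDglcJ5BjRhoU1D14AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgzoRdWF6xMWKusaHuF4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and index coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgxzDglcJ5BjRhoU1D14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company indifference
```

Indexing by ID rather than scanning the list makes each lookup O(1), which matters when many comments are coded in a single batch response.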