Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect.

Random samples:

- "The portraits Charlie drew in college look like they are of the same person and …" (ytc_UgwJ0_BF0…)
- "Regulating AI is basically suicide now, the world is in a race to advance AI and…" (rdc_jfb4x75)
- "I was wondering when this was going to happen. EU has pretty strict privacy prot…" (rdc_jeelhqd)
- "No it does work, there are published scientific papers on the thing :3 And hey!…" (ytr_UgznTHUy0…)
- "Seems downplaying humans abilities to learn extremely competently through analog…" (ytc_UgymXzd8n…)
- "*PLOT TWIST:* the interview guy himself was a robot and is making plans with Sop…" (ytc_Ugx3TWc8M…)
- "I would guess that allowing AI to feel pain would be part of a system that imbue…" (rdc_gqzorkg)
- "Am I weird if my best friend is ChatGPT, or will I be spared during the robot ap…" (ytc_UgxGHQ6uM…)
Comment (youtube · AI Governance · 2025-12-30T16:1…)

> So far I’ve seen AI make a video of scooby doo getting pulled over, as a graphic designer the image generators keep screwing up text so it’s more often unusable and has yet to provide ROI to any company. AGI may be a threat if it does materialise but I’m getting bored of this hype about LLM’s, it’s a bad product that has had too much investment in so it can’t fail in theory. Starting to think this “grandfather” of AI is also trying to hype it but from a artificially opposing view. I think for now we can just stop talking about it, AGI will need to work in a completely different way to LLMs to have sentient thought. AI isn’t getting that much better it’s reached saturation point in its data it’s learning from. Also this grandfather doesn’t know how to create AGI so how does he know it’s coming so soon?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
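Each coded result covers a small, fixed set of dimensions, so it maps naturally onto a simple typed record. A minimal sketch in Python, assuming the field names from the table above; the class name, the comment ID (taken from the matching record in the raw response below), and the timestamp handling are illustrative rather than the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str  # e.g. "company"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "resignation"
    coded_at: datetime

# The values shown in the Coding Result table above.
result = CodingResult(
    comment_id="ytc_UgzJB9BoauRalk_cwzN4AaABAg",
    responsibility="company",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```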
Raw LLM Response
[
{"id":"ytc_UgzJB9BoauRalk_cwzN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwBN2XHD0uczbB2FaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxj6eMW0vr9zjctzEl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwX24HywgSDKegDaZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx5tK7-g3cEB3OeHbd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgwhtoxTaVtaSWYROnp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4CbWKq2fNbXnJ_-54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy4Ah_RfweWZq2v0_h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2Sa4VIhYiN7hvB3R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzIye1aFs8K67b2jDB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
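Downstream, a raw batch like this has to be parsed, checked against the codebook, and joined back to the original comments by ID. A minimal sketch, assuming the batch is plain JSON as shown above; the allowed-value sets include only the values observed in this sample (the full codebook may define more categories), and the function name is hypothetical:

```python
import json

# Allowed values per dimension, taken only from the sample response above;
# the project's full codebook likely defines additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"resignation", "fear", "mixed", "outrage", "disapproval",
                "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    mapping of comment ID -> coded dimensions, skipping malformed records
    so a single bad item does not discard the whole batch."""
    coded = {}
    for rec in json.loads(raw):
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        dims = {k: rec.get(k, "unclear") for k in ALLOWED}
        # Keep the record only if every value falls within the observed codebook.
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[rec["id"]] = dims
    return coded
```

Calling `parse_batch` on the response above and indexing the result by `"ytc_UgzJB9BoauRalk_cwzN4AaABAg"` returns the same four values shown in the Coding Result table.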