Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "20. Truckers need to get their in bread azzes over to the truck lane. 19. When …" (ytc_Ugz31IiUS…)
- "That’s not the IT industry’s fault. Honestly, you’re an idiot if you replace you…" (rdc_n5gq1f7)
- "Lies and more lies. If has war or damage, or nations fighting.. isn't about AI, …" (ytc_UgyK-ktth…)
- "You can't close Pandora's Box. We can't stop AI. But we need to control it. We n…" (ytc_UgxGsFh9H…)
- "Very disgusting 🫣 😒 😑 😕. Stop embarrassed your beautiful country and remove thi…" (ytc_Ugyua53ek…)
- "@EVILFREAKINGCAT It really doesn't. Unless the whole model is nigh exclusively …" (ytr_UgznTHUy0…)
- "What AI generative models do is enable those that are creative (i.e. have the id…" (ytc_UgxUJ7qip…)
- "why? If there is a conscious AI species demanding rights, why would they have to…" (ytr_Ugz2xhIPX…)
Comment
Let's say we were back in the Renaissance and the greatest theoreticians then were predicting the impacts of technological advancement. Wouldn't some of them have said that humankind would be destroyed by it? Wouldn't others have said that while it's unpredictable, technological advancement will likely do more good than harm in terms of human potential and population growth. I'm not comfortable with the rapid pace of AI development either, but I think the prediction of what AI impacts will be is not likely human extinction, but more likely the solutions to disease and famine.
Source: youtube
Topic: AI Governance
Timestamp: 2025-10-15T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy1egUSacGBMPQ4BKV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwetfDxOVuH1vuc9RB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVP425hKUyxaCzdmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyI5DPCta6duLKzyr54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPaYcIVSzYEPNHe6p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx0rZRZ9c7FMXTPdsd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzdQEWkOF26mJP8NEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_vKorzBDdq9QHXlJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzbaaIIZiBMusgDrkl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy4LxW4IjIcZHmjdhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
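
Assuming the raw response is always a JSON array of objects with the five fields shown above (`id` plus the four coded dimensions), parsing it and looking up a coding by comment ID can be sketched as follows. `index_codings` is a hypothetical helper written for illustration, not part of the tool itself, and the dimension names are taken from the batch above.

```python
import json

# Dimension keys observed in the raw response above; the full codebook
# presumably defines the allowed values for each.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response (a JSON array of coding objects)
    and index the codings by comment ID.

    Illustrative helper: the real tool's internals may differ.
    """
    rows = json.loads(raw)
    out = {}
    for row in rows:
        # Reject rows that drop the ID or any coded dimension,
        # so malformed model output fails loudly instead of silently.
        missing = [k for k in DIMENSIONS if k not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed coding row: {row!r}")
        out[row["id"]] = {k: row[k] for k in DIMENSIONS}
    return out

# Example using one row from the batch above.
raw = ('[{"id":"ytc_UgwPaYcIVSzYEPNHe6p4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"unclear","emotion":"resignation"}]')
codings = index_codings(raw)
print(codings["ytc_UgwPaYcIVSzYEPNHe6p4AaABAg"]["emotion"])  # resignation
```

Indexing by ID is what makes the "inspect any coded comment" lookup cheap: one parse per batch, then constant-time retrieval per comment.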